00:00:00.000 Started by upstream project "autotest-per-patch" build number 132696
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:04.104 The recommended git tool is: git
00:00:04.104 using credential 00000000-0000-0000-0000-000000000002
00:00:04.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:04.118 Fetching changes from the remote Git repository
00:00:04.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:04.136 Using shallow fetch with depth 1
00:00:04.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:04.136 > git --version # timeout=10
00:00:04.149 > git --version # 'git version 2.39.2'
00:00:04.150 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:04.162 Setting http proxy: proxy-dmz.intel.com:911
00:00:04.162 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.370 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.382 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.397 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:10.397 > git config core.sparsecheckout # timeout=10
00:00:10.412 > git read-tree -mu HEAD # timeout=10
00:00:10.431 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:10.459 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:10.460 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:10.553 [Pipeline] Start of Pipeline
00:00:10.565 [Pipeline] library
00:00:10.567 Loading library shm_lib@master
00:00:10.567 Library shm_lib@master is cached. Copying from home.
00:00:10.583 [Pipeline] node
00:00:10.602 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:10.604 [Pipeline] {
00:00:10.612 [Pipeline] catchError
00:00:10.613 [Pipeline] {
00:00:10.626 [Pipeline] wrap
00:00:10.633 [Pipeline] {
00:00:10.640 [Pipeline] stage
00:00:10.642 [Pipeline] { (Prologue)
00:00:10.881 [Pipeline] sh
00:00:11.163 + logger -p user.info -t JENKINS-CI
00:00:11.179 [Pipeline] echo
00:00:11.180 Node: GP6
00:00:11.187 [Pipeline] sh
00:00:11.484 [Pipeline] setCustomBuildProperty
00:00:11.498 [Pipeline] echo
00:00:11.500 Cleanup processes
00:00:11.514 [Pipeline] sh
00:00:11.795 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.795 2041835 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.809 [Pipeline] sh
00:00:12.091 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:12.091 ++ grep -v 'sudo pgrep'
00:00:12.091 ++ awk '{print $1}'
00:00:12.091 + sudo kill -9
00:00:12.091 + true
00:00:12.120 [Pipeline] cleanWs
00:00:12.201 [WS-CLEANUP] Deleting project workspace...
00:00:12.201 [WS-CLEANUP] Deferred wipeout is used...
00:00:12.208 [WS-CLEANUP] done
00:00:12.213 [Pipeline] setCustomBuildProperty
00:00:12.230 [Pipeline] sh
00:00:12.514 + sudo git config --global --replace-all safe.directory '*'
00:00:12.586 [Pipeline] httpRequest
00:00:12.892 [Pipeline] echo
00:00:12.894 Sorcerer 10.211.164.20 is alive
00:00:12.902 [Pipeline] retry
00:00:12.904 [Pipeline] {
00:00:12.918 [Pipeline] httpRequest
00:00:12.922 HttpMethod: GET
00:00:12.923 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.923 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.929 Response Code: HTTP/1.1 200 OK
00:00:12.929 Success: Status code 200 is in the accepted range: 200,404
00:00:12.930 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:47.101 [Pipeline] }
00:00:47.118 [Pipeline] // retry
00:00:47.126 [Pipeline] sh
00:00:47.414 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:47.432 [Pipeline] httpRequest
00:00:47.992 [Pipeline] echo
00:00:47.994 Sorcerer 10.211.164.20 is alive
00:00:48.003 [Pipeline] retry
00:00:48.005 [Pipeline] {
00:00:48.020 [Pipeline] httpRequest
00:00:48.024 HttpMethod: GET
00:00:48.025 URL: http://10.211.164.20/packages/spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz
00:00:48.025 Sending request to url: http://10.211.164.20/packages/spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz
00:00:48.030 Response Code: HTTP/1.1 200 OK
00:00:48.031 Success: Status code 200 is in the accepted range: 200,404
00:00:48.031 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz
00:05:50.157 [Pipeline] }
00:05:50.177 [Pipeline] // retry
00:05:50.184 [Pipeline] sh
00:05:50.471 + tar --no-same-owner -xf spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz
00:05:53.780 [Pipeline] sh
00:05:54.065 + git -C spdk log --oneline -n5
00:05:54.066 62083ef48 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:05:54.066 289f56464 lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:05:54.066 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:05:54.066 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:05:54.066 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:05:54.077 [Pipeline] }
00:05:54.094 [Pipeline] // stage
00:05:54.102 [Pipeline] stage
00:05:54.105 [Pipeline] { (Prepare)
00:05:54.123 [Pipeline] writeFile
00:05:54.139 [Pipeline] sh
00:05:54.424 + logger -p user.info -t JENKINS-CI
00:05:54.436 [Pipeline] sh
00:05:54.722 + logger -p user.info -t JENKINS-CI
00:05:54.736 [Pipeline] sh
00:05:55.021 + cat autorun-spdk.conf
00:05:55.021 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:55.021 SPDK_TEST_NVMF=1
00:05:55.021 SPDK_TEST_NVME_CLI=1
00:05:55.021 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:55.021 SPDK_TEST_NVMF_NICS=e810
00:05:55.021 SPDK_TEST_VFIOUSER=1
00:05:55.021 SPDK_RUN_UBSAN=1
00:05:55.021 NET_TYPE=phy
00:05:55.028 RUN_NIGHTLY=0
00:05:55.033 [Pipeline] readFile
00:05:55.100 [Pipeline] withEnv
00:05:55.101 [Pipeline] {
00:05:55.109 [Pipeline] sh
00:05:55.388 + set -ex
00:05:55.388 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:05:55.388 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:55.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:55.388 ++ SPDK_TEST_NVMF=1
00:05:55.388 ++ SPDK_TEST_NVME_CLI=1
00:05:55.388 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:55.388 ++ SPDK_TEST_NVMF_NICS=e810
00:05:55.388 ++ SPDK_TEST_VFIOUSER=1
00:05:55.388 ++ SPDK_RUN_UBSAN=1
00:05:55.388 ++ NET_TYPE=phy
00:05:55.388 ++ RUN_NIGHTLY=0
00:05:55.388 + case $SPDK_TEST_NVMF_NICS in
00:05:55.388 + DRIVERS=ice
00:05:55.388 + [[ tcp == \r\d\m\a ]]
00:05:55.388 + [[ -n ice ]]
00:05:55.388 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:05:55.388 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:05:55.388 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:05:55.388 rmmod: ERROR: Module irdma is not currently loaded
00:05:55.388 rmmod: ERROR: Module i40iw is not currently loaded
00:05:55.388 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:55.388 + true
00:05:55.388 + for D in $DRIVERS
00:05:55.388 + sudo modprobe ice
00:05:55.388 + exit 0
00:05:55.397 [Pipeline] }
00:05:55.412 [Pipeline] // withEnv
00:05:55.416 [Pipeline] }
00:05:55.429 [Pipeline] // stage
00:05:55.438 [Pipeline] catchError
00:05:55.440 [Pipeline] {
00:05:55.453 [Pipeline] timeout
00:05:55.454 Timeout set to expire in 1 hr 0 min
00:05:55.455 [Pipeline] {
00:05:55.468 [Pipeline] stage
00:05:55.470 [Pipeline] { (Tests)
00:05:55.483 [Pipeline] sh
00:05:55.767 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:55.767 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:55.767 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:55.767 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:55.767 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:55.767 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:55.767 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:55.767 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:55.767 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:55.767 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:55.767 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:55.767 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:55.767 + source /etc/os-release
00:05:55.767 ++ NAME='Fedora Linux'
00:05:55.767 ++ VERSION='39 (Cloud Edition)'
00:05:55.767 ++ ID=fedora
00:05:55.767 ++ VERSION_ID=39
00:05:55.767 ++ VERSION_CODENAME=
00:05:55.767 ++ PLATFORM_ID=platform:f39
00:05:55.767 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:55.767 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:55.767 ++ LOGO=fedora-logo-icon
00:05:55.767 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:55.767 ++ HOME_URL=https://fedoraproject.org/
00:05:55.767 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:55.767 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:55.767 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:55.767 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:55.767 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:55.767 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:55.767 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:55.767 ++ SUPPORT_END=2024-11-12
00:05:55.767 ++ VARIANT='Cloud Edition'
00:05:55.767 ++ VARIANT_ID=cloud
00:05:55.767 + uname -a
00:05:55.767 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:55.767 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:56.701 Hugepages
00:05:56.701 node hugesize free / total
00:05:56.701 node0 1048576kB 0 / 0
00:05:56.701 node0 2048kB 0 / 0
00:05:56.701 node1 1048576kB 0 / 0
00:05:56.701 node1 2048kB 0 / 0
00:05:56.701 
00:05:56.701 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:56.701 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:05:56.959 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:05:56.959 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:05:56.959 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:05:56.959 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:05:56.959 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:05:56.959 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:05:56.959 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:05:56.959 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:05:56.959 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:05:56.959 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:05:56.959 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:05:56.959 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:05:56.959 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:05:56.959 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:05:56.959 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:05:56.959 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:05:56.959 + rm -f /tmp/spdk-ld-path
00:05:56.959 + source autorun-spdk.conf
00:05:56.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:56.959 ++ SPDK_TEST_NVMF=1
00:05:56.959 ++ SPDK_TEST_NVME_CLI=1
00:05:56.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:56.959 ++ SPDK_TEST_NVMF_NICS=e810
00:05:56.959 ++ SPDK_TEST_VFIOUSER=1
00:05:56.959 ++ SPDK_RUN_UBSAN=1
00:05:56.959 ++ NET_TYPE=phy
00:05:56.959 ++ RUN_NIGHTLY=0
00:05:56.959 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:56.959 + [[ -n '' ]]
00:05:56.959 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:56.959 + for M in /var/spdk/build-*-manifest.txt
00:05:56.959 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:56.959 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:56.959 + for M in /var/spdk/build-*-manifest.txt
00:05:56.959 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:56.959 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:56.959 + for M in /var/spdk/build-*-manifest.txt
00:05:56.959 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:56.959 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:56.959 ++ uname
00:05:56.959 + [[ Linux == \L\i\n\u\x ]]
00:05:56.959 + sudo dmesg -T
00:05:56.959 + sudo dmesg --clear
00:05:56.959 + dmesg_pid=2043790
00:05:56.959 + [[ Fedora Linux == FreeBSD ]]
00:05:56.959 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:56.959 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:56.959 + sudo dmesg -Tw
00:05:56.959 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:56.959 + [[ -x /usr/src/fio-static/fio ]]
00:05:56.959 + export FIO_BIN=/usr/src/fio-static/fio
00:05:56.959 + FIO_BIN=/usr/src/fio-static/fio
00:05:56.959 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:56.959 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:56.959 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:56.959 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:56.959 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:56.959 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:56.959 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:56.959 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:56.959 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:56.959 13:37:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:56.959 13:37:28 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:05:56.959 13:37:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:05:56.959 13:37:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:56.959 13:37:28 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:57.218 13:37:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:57.218 13:37:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:57.218 13:37:28 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:57.218 13:37:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:57.218 13:37:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:57.218 13:37:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:57.218 13:37:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.218 13:37:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.218 13:37:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.218 13:37:28 -- paths/export.sh@5 -- $ export PATH
00:05:57.218 13:37:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.218 13:37:28 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:05:57.218 13:37:28 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:57.218 13:37:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733402248.XXXXXX
00:05:57.218 13:37:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733402248.aHPLfF
00:05:57.218 13:37:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:57.218 13:37:28 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:57.218 13:37:28 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:05:57.218 13:37:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:05:57.218 13:37:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:05:57.218 13:37:28 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:57.218 13:37:28 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:57.218 13:37:28 -- common/autotest_common.sh@10 -- $ set +x
00:05:57.218 13:37:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:05:57.218 13:37:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:57.218 13:37:28 -- pm/common@17 -- $ local monitor
00:05:57.218 13:37:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:57.218 13:37:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:57.218 13:37:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:57.218 13:37:28 -- pm/common@21 -- $ date +%s
00:05:57.218 13:37:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:57.218 13:37:28 -- pm/common@21 -- $ date +%s
00:05:57.218 13:37:28 -- pm/common@25 -- $ sleep 1
00:05:57.218 13:37:28 -- pm/common@21 -- $ date +%s
00:05:57.218 13:37:28 -- pm/common@21 -- $ date +%s
00:05:57.218 13:37:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402248
00:05:57.218 13:37:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402248
00:05:57.218 13:37:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402248
00:05:57.218 13:37:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402248
00:05:57.218 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402248_collect-cpu-load.pm.log
00:05:57.218 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402248_collect-vmstat.pm.log
00:05:57.218 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402248_collect-cpu-temp.pm.log
00:05:57.218 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402248_collect-bmc-pm.bmc.pm.log
00:05:58.154 13:37:29 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:58.154 13:37:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:58.154 13:37:29 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:58.154 13:37:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:58.154 13:37:29 -- spdk/autobuild.sh@16 -- $ date -u
00:05:58.154 Thu Dec 5 12:37:29 PM UTC 2024
00:05:58.155 13:37:29 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:58.155 v25.01-pre-298-g62083ef48
00:05:58.155 13:37:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:58.155 13:37:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:58.155 13:37:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:58.155 13:37:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:58.155 13:37:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:58.155 13:37:29 -- common/autotest_common.sh@10 -- $ set +x
00:05:58.155 ************************************
00:05:58.155 START TEST ubsan
00:05:58.155 ************************************
00:05:58.155 13:37:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:58.155 using ubsan
00:05:58.155 
00:05:58.155 real 0m0.000s
00:05:58.155 user 0m0.000s
00:05:58.155 sys 0m0.000s
00:05:58.155 13:37:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:58.155 13:37:29 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:58.155 ************************************
00:05:58.155 END TEST ubsan
00:05:58.155 ************************************
00:05:58.155 13:37:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:58.155 13:37:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:58.155 13:37:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:58.155 13:37:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:58.155 13:37:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:58.155 13:37:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:58.155 13:37:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:58.155 13:37:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:58.155 13:37:29 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:58.413 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:58.413 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:58.671 Using 'verbs' RDMA provider
00:06:09.211 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:06:19.241 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:06:19.241 Creating mk/config.mk...done.
00:06:19.241 Creating mk/cc.flags.mk...done.
00:06:19.241 Type 'make' to build.
00:06:19.241 13:37:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:06:19.241 13:37:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:19.241 13:37:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:19.241 13:37:50 -- common/autotest_common.sh@10 -- $ set +x
00:06:19.241 ************************************
00:06:19.241 START TEST make
00:06:19.241 ************************************
00:06:19.241 13:37:50 make -- common/autotest_common.sh@1129 -- $ make -j48
00:06:19.501 make[1]: Nothing to be done for 'all'.
00:06:21.420 The Meson build system
00:06:21.420 Version: 1.5.0
00:06:21.420 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:21.420 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:21.420 Build type: native build
00:06:21.420 Project name: libvfio-user
00:06:21.420 Project version: 0.0.1
00:06:21.420 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:21.420 C linker for the host machine: cc ld.bfd 2.40-14
00:06:21.420 Host machine cpu family: x86_64
00:06:21.420 Host machine cpu: x86_64
00:06:21.420 Run-time dependency threads found: YES
00:06:21.420 Library dl found: YES
00:06:21.420 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:21.420 Run-time dependency json-c found: YES 0.17
00:06:21.420 Run-time dependency cmocka found: YES 1.1.7
00:06:21.420 Program pytest-3 found: NO
00:06:21.420 Program flake8 found: NO
00:06:21.420 Program misspell-fixer found: NO
00:06:21.420 Program restructuredtext-lint found: NO
00:06:21.420 Program valgrind found: YES (/usr/bin/valgrind)
00:06:21.420 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:21.420 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:21.420 Compiler for C supports arguments -Wwrite-strings: YES
00:06:21.420 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:21.420 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:21.420 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:21.420 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:21.420 Build targets in project: 8
00:06:21.421 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:06:21.421 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:06:21.421 
00:06:21.421 libvfio-user 0.0.1
00:06:21.421 
00:06:21.421 User defined options
00:06:21.421 buildtype : debug
00:06:21.421 default_library: shared
00:06:21.421 libdir : /usr/local/lib
00:06:21.421 
00:06:21.421 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:22.000 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:22.269 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:06:22.269 [2/37] Compiling C object samples/null.p/null.c.o
00:06:22.269 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:06:22.269 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:06:22.269 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:06:22.269 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:06:22.269 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:06:22.269 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:06:22.269 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:06:22.269 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:06:22.269 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:06:22.269 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:06:22.269 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:06:22.269 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:06:22.269 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:06:22.269 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:06:22.269 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:06:22.269 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:06:22.269 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:06:22.269 [20/37] Compiling C object test/unit_tests.p/mocks.c.o
00:06:22.269 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:06:22.534 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:06:22.534 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:06:22.534 [24/37] Compiling C object samples/server.p/server.c.o
00:06:22.534 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:06:22.534 [26/37] Compiling C object samples/client.p/client.c.o
00:06:22.534 [27/37] Linking target samples/client
00:06:22.534 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:06:22.534 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:06:22.534 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:06:22.534 [31/37] Linking target test/unit_tests
00:06:22.796 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:06:22.796 [33/37] Linking target samples/null
00:06:22.796 [34/37] Linking target samples/server
00:06:22.796 [35/37] Linking target samples/gpio-pci-idio-16
00:06:22.796 [36/37] Linking target samples/shadow_ioeventfd_server
00:06:22.796 [37/37] Linking target samples/lspci
00:06:22.796 INFO: autodetecting backend as ninja
00:06:22.796 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:23.061 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:24.002 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:24.002 ninja: no work to do.
00:06:28.184 The Meson build system
00:06:28.184 Version: 1.5.0
00:06:28.184 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:06:28.184 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:06:28.184 Build type: native build
00:06:28.184 Program cat found: YES (/usr/bin/cat)
00:06:28.184 Project name: DPDK
00:06:28.184 Project version: 24.03.0
00:06:28.184 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:28.184 C linker for the host machine: cc ld.bfd 2.40-14
00:06:28.184 Host machine cpu family: x86_64
00:06:28.184 Host machine cpu: x86_64
00:06:28.184 Message: ## Building in Developer Mode ##
00:06:28.184 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:28.184 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:28.184 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:28.184 Program python3 found: YES (/usr/bin/python3)
00:06:28.184 Program cat found: YES (/usr/bin/cat)
00:06:28.184 Compiler for C supports arguments -march=native: YES
00:06:28.184 Checking for size of "void *" : 8
00:06:28.184 Checking for size of "void *" : 8 (cached)
00:06:28.184 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:28.184 Library m found: YES
00:06:28.184 Library numa found: YES
00:06:28.184 Has header "numaif.h" : YES
00:06:28.184 Library fdt found: NO
00:06:28.184 Library execinfo found: NO
00:06:28.184 Has header "execinfo.h" : YES
00:06:28.184 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:28.184 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:28.184 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:28.184 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:28.184 Run-time dependency openssl found: YES 3.1.1
00:06:28.184 Run-time dependency libpcap found: YES 1.10.4
00:06:28.184 Has header "pcap.h" with dependency libpcap: YES
00:06:28.184 Compiler for C supports arguments -Wcast-qual: YES
00:06:28.184 Compiler for C supports arguments -Wdeprecated: YES
00:06:28.184 Compiler for C supports arguments -Wformat: YES
00:06:28.184 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:28.184 Compiler for C supports arguments -Wformat-security: NO
00:06:28.184 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:28.184 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:28.184 Compiler for C supports arguments -Wnested-externs: YES
00:06:28.184 Compiler for C supports arguments -Wold-style-definition: YES
00:06:28.184 Compiler for C supports arguments -Wpointer-arith: YES
00:06:28.184 Compiler for C supports arguments -Wsign-compare: YES
00:06:28.184 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:28.184 Compiler for C supports arguments -Wundef: YES
00:06:28.184 Compiler for C supports arguments -Wwrite-strings: YES
00:06:28.184 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:28.184 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:28.184 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:28.184 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:28.184 Program objdump found: YES (/usr/bin/objdump)
00:06:28.184 Compiler for C supports arguments -mavx512f: YES
00:06:28.184 Checking if "AVX512 checking" compiles: YES
00:06:28.184 Fetching value of define "__SSE4_2__" : 1
00:06:28.184 Fetching value of define "__AES__" : 1
00:06:28.184 Fetching value of define "__AVX__" : 1
00:06:28.184 Fetching value of define "__AVX2__" : (undefined)
00:06:28.184 Fetching value of define "__AVX512BW__" : (undefined)
00:06:28.184 Fetching value of define "__AVX512CD__" : (undefined)
00:06:28.184 Fetching value of define "__AVX512DQ__" : (undefined)
00:06:28.184 Fetching value of define "__AVX512F__" : (undefined)
00:06:28.184 Fetching value of define "__AVX512VL__" : (undefined)
00:06:28.184 Fetching value of define "__PCLMUL__" : 1
00:06:28.184 Fetching value of define "__RDRND__" : 1
00:06:28.184 Fetching value of define "__RDSEED__" : (undefined)
00:06:28.184 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:06:28.184 Fetching value of define "__znver1__" : (undefined)
00:06:28.184 Fetching value of define "__znver2__" : (undefined)
00:06:28.184 Fetching value of define "__znver3__" : (undefined)
00:06:28.184 Fetching value of define "__znver4__" : (undefined)
00:06:28.184 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:28.184 Message: lib/log: Defining dependency "log"
00:06:28.184 Message: lib/kvargs: Defining dependency "kvargs"
00:06:28.184 Message: lib/telemetry: Defining dependency "telemetry"
00:06:28.184 Checking for function "getentropy" : NO
00:06:28.184 Message: lib/eal: Defining dependency "eal"
00:06:28.184 Message: lib/ring: Defining dependency "ring"
00:06:28.184 Message: lib/rcu: Defining dependency "rcu"
00:06:28.184 Message: lib/mempool: Defining dependency "mempool"
00:06:28.184 Message: lib/mbuf: Defining dependency "mbuf"
00:06:28.184 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:28.184 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:06:28.184 Compiler for C supports arguments -mpclmul: YES
00:06:28.184 Compiler for C supports arguments -maes: YES
00:06:28.184 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:28.184 Compiler for C supports arguments -mavx512bw: YES
00:06:28.184 Compiler for C supports arguments -mavx512dq: YES
00:06:28.184 Compiler for C supports arguments -mavx512vl: YES
00:06:28.184 Compiler for C supports arguments -mvpclmulqdq: YES
00:06:28.184 Compiler for C supports arguments -mavx2: YES
00:06:28.184 Compiler for C supports arguments -mavx: YES
00:06:28.184 Message: lib/net: Defining dependency "net"
Message: lib/meter: Defining dependency "meter" 00:06:28.184 Message: lib/ethdev: Defining dependency "ethdev" 00:06:28.184 Message: lib/pci: Defining dependency "pci" 00:06:28.184 Message: lib/cmdline: Defining dependency "cmdline" 00:06:28.184 Message: lib/hash: Defining dependency "hash" 00:06:28.184 Message: lib/timer: Defining dependency "timer" 00:06:28.184 Message: lib/compressdev: Defining dependency "compressdev" 00:06:28.184 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:28.184 Message: lib/dmadev: Defining dependency "dmadev" 00:06:28.184 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:28.184 Message: lib/power: Defining dependency "power" 00:06:28.184 Message: lib/reorder: Defining dependency "reorder" 00:06:28.184 Message: lib/security: Defining dependency "security" 00:06:28.184 Has header "linux/userfaultfd.h" : YES 00:06:28.184 Has header "linux/vduse.h" : YES 00:06:28.184 Message: lib/vhost: Defining dependency "vhost" 00:06:28.185 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:28.185 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:28.185 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:28.185 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:28.185 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:28.185 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:28.185 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:28.185 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:28.185 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:28.185 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:28.185 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:28.185 Configuring doxy-api-html.conf using configuration 00:06:28.185 Configuring doxy-api-man.conf using configuration 00:06:28.185 
Program mandb found: YES (/usr/bin/mandb) 00:06:28.185 Program sphinx-build found: NO 00:06:28.185 Configuring rte_build_config.h using configuration 00:06:28.185 Message: 00:06:28.185 ================= 00:06:28.185 Applications Enabled 00:06:28.185 ================= 00:06:28.185 00:06:28.185 apps: 00:06:28.185 00:06:28.185 00:06:28.185 Message: 00:06:28.185 ================= 00:06:28.185 Libraries Enabled 00:06:28.185 ================= 00:06:28.185 00:06:28.185 libs: 00:06:28.185 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:28.185 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:28.185 cryptodev, dmadev, power, reorder, security, vhost, 00:06:28.185 00:06:28.185 Message: 00:06:28.185 =============== 00:06:28.185 Drivers Enabled 00:06:28.185 =============== 00:06:28.185 00:06:28.185 common: 00:06:28.185 00:06:28.185 bus: 00:06:28.185 pci, vdev, 00:06:28.185 mempool: 00:06:28.185 ring, 00:06:28.185 dma: 00:06:28.185 00:06:28.185 net: 00:06:28.185 00:06:28.185 crypto: 00:06:28.185 00:06:28.185 compress: 00:06:28.185 00:06:28.185 vdpa: 00:06:28.185 00:06:28.185 00:06:28.185 Message: 00:06:28.185 ================= 00:06:28.185 Content Skipped 00:06:28.185 ================= 00:06:28.185 00:06:28.185 apps: 00:06:28.185 dumpcap: explicitly disabled via build config 00:06:28.185 graph: explicitly disabled via build config 00:06:28.185 pdump: explicitly disabled via build config 00:06:28.185 proc-info: explicitly disabled via build config 00:06:28.185 test-acl: explicitly disabled via build config 00:06:28.185 test-bbdev: explicitly disabled via build config 00:06:28.185 test-cmdline: explicitly disabled via build config 00:06:28.185 test-compress-perf: explicitly disabled via build config 00:06:28.185 test-crypto-perf: explicitly disabled via build config 00:06:28.185 test-dma-perf: explicitly disabled via build config 00:06:28.185 test-eventdev: explicitly disabled via build config 00:06:28.185 test-fib: explicitly disabled via build 
config 00:06:28.185 test-flow-perf: explicitly disabled via build config 00:06:28.185 test-gpudev: explicitly disabled via build config 00:06:28.185 test-mldev: explicitly disabled via build config 00:06:28.185 test-pipeline: explicitly disabled via build config 00:06:28.185 test-pmd: explicitly disabled via build config 00:06:28.185 test-regex: explicitly disabled via build config 00:06:28.185 test-sad: explicitly disabled via build config 00:06:28.185 test-security-perf: explicitly disabled via build config 00:06:28.185 00:06:28.185 libs: 00:06:28.185 argparse: explicitly disabled via build config 00:06:28.185 metrics: explicitly disabled via build config 00:06:28.185 acl: explicitly disabled via build config 00:06:28.185 bbdev: explicitly disabled via build config 00:06:28.185 bitratestats: explicitly disabled via build config 00:06:28.185 bpf: explicitly disabled via build config 00:06:28.185 cfgfile: explicitly disabled via build config 00:06:28.185 distributor: explicitly disabled via build config 00:06:28.185 efd: explicitly disabled via build config 00:06:28.185 eventdev: explicitly disabled via build config 00:06:28.185 dispatcher: explicitly disabled via build config 00:06:28.185 gpudev: explicitly disabled via build config 00:06:28.185 gro: explicitly disabled via build config 00:06:28.185 gso: explicitly disabled via build config 00:06:28.185 ip_frag: explicitly disabled via build config 00:06:28.185 jobstats: explicitly disabled via build config 00:06:28.185 latencystats: explicitly disabled via build config 00:06:28.185 lpm: explicitly disabled via build config 00:06:28.185 member: explicitly disabled via build config 00:06:28.185 pcapng: explicitly disabled via build config 00:06:28.185 rawdev: explicitly disabled via build config 00:06:28.185 regexdev: explicitly disabled via build config 00:06:28.185 mldev: explicitly disabled via build config 00:06:28.185 rib: explicitly disabled via build config 00:06:28.185 sched: explicitly disabled via build 
config 00:06:28.185 stack: explicitly disabled via build config 00:06:28.185 ipsec: explicitly disabled via build config 00:06:28.185 pdcp: explicitly disabled via build config 00:06:28.185 fib: explicitly disabled via build config 00:06:28.185 port: explicitly disabled via build config 00:06:28.185 pdump: explicitly disabled via build config 00:06:28.185 table: explicitly disabled via build config 00:06:28.185 pipeline: explicitly disabled via build config 00:06:28.185 graph: explicitly disabled via build config 00:06:28.185 node: explicitly disabled via build config 00:06:28.185 00:06:28.185 drivers: 00:06:28.185 common/cpt: not in enabled drivers build config 00:06:28.185 common/dpaax: not in enabled drivers build config 00:06:28.185 common/iavf: not in enabled drivers build config 00:06:28.185 common/idpf: not in enabled drivers build config 00:06:28.185 common/ionic: not in enabled drivers build config 00:06:28.185 common/mvep: not in enabled drivers build config 00:06:28.185 common/octeontx: not in enabled drivers build config 00:06:28.185 bus/auxiliary: not in enabled drivers build config 00:06:28.185 bus/cdx: not in enabled drivers build config 00:06:28.185 bus/dpaa: not in enabled drivers build config 00:06:28.185 bus/fslmc: not in enabled drivers build config 00:06:28.185 bus/ifpga: not in enabled drivers build config 00:06:28.185 bus/platform: not in enabled drivers build config 00:06:28.185 bus/uacce: not in enabled drivers build config 00:06:28.185 bus/vmbus: not in enabled drivers build config 00:06:28.185 common/cnxk: not in enabled drivers build config 00:06:28.185 common/mlx5: not in enabled drivers build config 00:06:28.185 common/nfp: not in enabled drivers build config 00:06:28.185 common/nitrox: not in enabled drivers build config 00:06:28.185 common/qat: not in enabled drivers build config 00:06:28.185 common/sfc_efx: not in enabled drivers build config 00:06:28.185 mempool/bucket: not in enabled drivers build config 00:06:28.185 mempool/cnxk: 
not in enabled drivers build config 00:06:28.185 mempool/dpaa: not in enabled drivers build config 00:06:28.185 mempool/dpaa2: not in enabled drivers build config 00:06:28.185 mempool/octeontx: not in enabled drivers build config 00:06:28.185 mempool/stack: not in enabled drivers build config 00:06:28.185 dma/cnxk: not in enabled drivers build config 00:06:28.185 dma/dpaa: not in enabled drivers build config 00:06:28.185 dma/dpaa2: not in enabled drivers build config 00:06:28.185 dma/hisilicon: not in enabled drivers build config 00:06:28.185 dma/idxd: not in enabled drivers build config 00:06:28.185 dma/ioat: not in enabled drivers build config 00:06:28.186 dma/skeleton: not in enabled drivers build config 00:06:28.186 net/af_packet: not in enabled drivers build config 00:06:28.186 net/af_xdp: not in enabled drivers build config 00:06:28.186 net/ark: not in enabled drivers build config 00:06:28.186 net/atlantic: not in enabled drivers build config 00:06:28.186 net/avp: not in enabled drivers build config 00:06:28.186 net/axgbe: not in enabled drivers build config 00:06:28.186 net/bnx2x: not in enabled drivers build config 00:06:28.186 net/bnxt: not in enabled drivers build config 00:06:28.186 net/bonding: not in enabled drivers build config 00:06:28.186 net/cnxk: not in enabled drivers build config 00:06:28.186 net/cpfl: not in enabled drivers build config 00:06:28.186 net/cxgbe: not in enabled drivers build config 00:06:28.186 net/dpaa: not in enabled drivers build config 00:06:28.186 net/dpaa2: not in enabled drivers build config 00:06:28.186 net/e1000: not in enabled drivers build config 00:06:28.186 net/ena: not in enabled drivers build config 00:06:28.186 net/enetc: not in enabled drivers build config 00:06:28.186 net/enetfec: not in enabled drivers build config 00:06:28.186 net/enic: not in enabled drivers build config 00:06:28.186 net/failsafe: not in enabled drivers build config 00:06:28.186 net/fm10k: not in enabled drivers build config 00:06:28.186 
net/gve: not in enabled drivers build config 00:06:28.186 net/hinic: not in enabled drivers build config 00:06:28.186 net/hns3: not in enabled drivers build config 00:06:28.186 net/i40e: not in enabled drivers build config 00:06:28.186 net/iavf: not in enabled drivers build config 00:06:28.186 net/ice: not in enabled drivers build config 00:06:28.186 net/idpf: not in enabled drivers build config 00:06:28.186 net/igc: not in enabled drivers build config 00:06:28.186 net/ionic: not in enabled drivers build config 00:06:28.186 net/ipn3ke: not in enabled drivers build config 00:06:28.186 net/ixgbe: not in enabled drivers build config 00:06:28.186 net/mana: not in enabled drivers build config 00:06:28.186 net/memif: not in enabled drivers build config 00:06:28.186 net/mlx4: not in enabled drivers build config 00:06:28.186 net/mlx5: not in enabled drivers build config 00:06:28.186 net/mvneta: not in enabled drivers build config 00:06:28.186 net/mvpp2: not in enabled drivers build config 00:06:28.186 net/netvsc: not in enabled drivers build config 00:06:28.186 net/nfb: not in enabled drivers build config 00:06:28.186 net/nfp: not in enabled drivers build config 00:06:28.186 net/ngbe: not in enabled drivers build config 00:06:28.186 net/null: not in enabled drivers build config 00:06:28.186 net/octeontx: not in enabled drivers build config 00:06:28.186 net/octeon_ep: not in enabled drivers build config 00:06:28.186 net/pcap: not in enabled drivers build config 00:06:28.186 net/pfe: not in enabled drivers build config 00:06:28.186 net/qede: not in enabled drivers build config 00:06:28.186 net/ring: not in enabled drivers build config 00:06:28.186 net/sfc: not in enabled drivers build config 00:06:28.186 net/softnic: not in enabled drivers build config 00:06:28.186 net/tap: not in enabled drivers build config 00:06:28.186 net/thunderx: not in enabled drivers build config 00:06:28.186 net/txgbe: not in enabled drivers build config 00:06:28.186 net/vdev_netvsc: not in enabled 
drivers build config 00:06:28.186 net/vhost: not in enabled drivers build config 00:06:28.186 net/virtio: not in enabled drivers build config 00:06:28.186 net/vmxnet3: not in enabled drivers build config 00:06:28.186 raw/*: missing internal dependency, "rawdev" 00:06:28.186 crypto/armv8: not in enabled drivers build config 00:06:28.186 crypto/bcmfs: not in enabled drivers build config 00:06:28.186 crypto/caam_jr: not in enabled drivers build config 00:06:28.186 crypto/ccp: not in enabled drivers build config 00:06:28.186 crypto/cnxk: not in enabled drivers build config 00:06:28.186 crypto/dpaa_sec: not in enabled drivers build config 00:06:28.186 crypto/dpaa2_sec: not in enabled drivers build config 00:06:28.186 crypto/ipsec_mb: not in enabled drivers build config 00:06:28.186 crypto/mlx5: not in enabled drivers build config 00:06:28.186 crypto/mvsam: not in enabled drivers build config 00:06:28.186 crypto/nitrox: not in enabled drivers build config 00:06:28.186 crypto/null: not in enabled drivers build config 00:06:28.186 crypto/octeontx: not in enabled drivers build config 00:06:28.186 crypto/openssl: not in enabled drivers build config 00:06:28.186 crypto/scheduler: not in enabled drivers build config 00:06:28.186 crypto/uadk: not in enabled drivers build config 00:06:28.186 crypto/virtio: not in enabled drivers build config 00:06:28.186 compress/isal: not in enabled drivers build config 00:06:28.186 compress/mlx5: not in enabled drivers build config 00:06:28.186 compress/nitrox: not in enabled drivers build config 00:06:28.186 compress/octeontx: not in enabled drivers build config 00:06:28.186 compress/zlib: not in enabled drivers build config 00:06:28.186 regex/*: missing internal dependency, "regexdev" 00:06:28.186 ml/*: missing internal dependency, "mldev" 00:06:28.186 vdpa/ifc: not in enabled drivers build config 00:06:28.186 vdpa/mlx5: not in enabled drivers build config 00:06:28.186 vdpa/nfp: not in enabled drivers build config 00:06:28.186 vdpa/sfc: not 
in enabled drivers build config 00:06:28.186 event/*: missing internal dependency, "eventdev" 00:06:28.186 baseband/*: missing internal dependency, "bbdev" 00:06:28.186 gpu/*: missing internal dependency, "gpudev" 00:06:28.186 00:06:28.186 00:06:28.751 Build targets in project: 85 00:06:28.751 00:06:28.751 DPDK 24.03.0 00:06:28.751 00:06:28.751 User defined options 00:06:28.751 buildtype : debug 00:06:28.751 default_library : shared 00:06:28.751 libdir : lib 00:06:28.751 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:28.751 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:28.751 c_link_args : 00:06:28.751 cpu_instruction_set: native 00:06:28.751 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:06:28.751 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:06:28.751 enable_docs : false 00:06:28.751 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:28.751 enable_kmods : false 00:06:28.751 max_lcores : 128 00:06:28.751 tests : false 00:06:28.751 00:06:28.751 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:29.321 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:29.321 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:29.321 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:29.321 [3/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:29.321 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:29.321 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:29.321 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:29.321 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:29.321 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:29.321 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:29.321 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:29.321 [11/268] Linking static target lib/librte_kvargs.a 00:06:29.321 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:29.321 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:29.321 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:29.321 [15/268] Linking static target lib/librte_log.a 00:06:29.321 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:29.890 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.153 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:30.153 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:30.153 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:30.153 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:30.153 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:30.153 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:30.153 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:30.153 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:30.153 [26/268] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:30.153 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:30.153 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:30.153 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:30.153 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:30.153 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:30.153 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:30.153 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:30.153 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:30.153 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:30.153 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:30.153 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:30.153 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:30.153 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:30.153 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:30.153 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:30.153 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:30.153 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:30.153 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:30.153 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:30.153 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:30.153 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:30.153 [48/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:30.153 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:30.153 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:30.153 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:30.153 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:30.153 [53/268] Linking static target lib/librte_telemetry.a 00:06:30.153 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:30.153 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:30.413 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:30.413 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:30.413 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:30.413 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:30.413 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:30.413 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:30.413 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:30.673 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:30.673 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:30.673 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.673 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:30.673 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:30.673 [68/268] Linking target lib/librte_log.so.24.1 00:06:30.938 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:30.938 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:30.938 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:30.938 [72/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:30.938 [73/268] Linking static target lib/librte_pci.a 00:06:30.938 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:30.938 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:30.938 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:30.938 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:30.938 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:30.938 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:31.197 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:31.197 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:31.197 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:31.197 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:31.197 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:31.197 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:31.197 [86/268] Linking target lib/librte_kvargs.so.24.1 00:06:31.197 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:31.197 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:31.197 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:31.197 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:31.197 [91/268] Linking static target lib/librte_meter.a 00:06:31.197 [92/268] Linking static target lib/librte_ring.a 00:06:31.197 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:31.197 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:31.197 
[95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:31.197 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:31.197 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:31.197 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:31.197 [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:31.197 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:31.197 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:31.197 [102/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:31.197 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:31.197 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:31.197 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:31.197 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:31.462 [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.462 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:31.462 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:31.462 [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:31.462 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:31.462 [112/268] Linking static target lib/librte_eal.a 00:06:31.462 [113/268] Linking static target lib/librte_rcu.a 00:06:31.462 [114/268] Linking target lib/librte_telemetry.so.24.1 00:06:31.462 [115/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:31.462 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:31.462 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:06:31.462 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:31.462 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:31.462 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:31.462 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:31.462 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:31.462 [123/268] Linking static target lib/librte_mempool.a 00:06:31.462 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:31.462 [125/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:31.462 [126/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:31.724 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:31.724 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:31.724 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:31.724 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:31.724 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:31.724 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.724 [133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.724 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:31.724 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:31.724 [136/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:31.724 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:31.985 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:31.985 [139/268] Linking static target lib/librte_net.a 00:06:31.985 [140/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:31.985 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:31.985 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:31.985 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:31.985 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:31.985 [145/268] Linking static target lib/librte_cmdline.a 00:06:31.985 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:31.985 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.251 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:32.251 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:32.251 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:32.251 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:32.251 [152/268] Linking static target lib/librte_timer.a 00:06:32.251 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:32.251 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:32.251 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:32.251 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:32.251 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:32.251 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:32.510 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:32.510 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.510 [161/268] Linking static target lib/librte_dmadev.a 00:06:32.510 [162/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:32.510 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:32.510 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:32.510 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:32.510 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:32.510 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:32.510 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.510 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:32.510 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:32.510 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:32.769 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.769 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:32.769 [174/268] Linking static target lib/librte_power.a 00:06:32.769 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:32.769 [176/268] Linking static target lib/librte_hash.a 00:06:32.769 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:32.769 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:32.769 [179/268] Linking static target lib/librte_compressdev.a 00:06:32.769 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:32.769 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:32.769 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:32.769 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:32.769 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 
00:06:32.769 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:32.770 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:32.770 [187/268] Linking static target lib/librte_reorder.a 00:06:32.770 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:32.770 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.028 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:33.028 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.028 [192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:33.028 [193/268] Linking static target lib/librte_mbuf.a 00:06:33.028 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:33.028 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:33.028 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:33.028 [197/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:33.028 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:33.028 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:33.028 [200/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:33.028 [201/268] Linking static target drivers/librte_bus_vdev.a 00:06:33.028 [202/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.285 [203/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.285 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:33.285 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:33.286 [206/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:06:33.286 [207/268] Linking static target lib/librte_security.a 00:06:33.286 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.286 [209/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.286 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:33.286 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:33.286 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:33.286 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:33.286 [214/268] Linking static target drivers/librte_mempool_ring.a 00:06:33.286 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:33.286 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.286 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:33.286 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:33.286 [219/268] Linking static target drivers/librte_bus_pci.a 00:06:33.286 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:33.543 [221/268] Linking static target lib/librte_ethdev.a 00:06:33.543 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.543 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:33.543 [224/268] Linking static target lib/librte_cryptodev.a 00:06:33.543 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.807 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.739 
[227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.112 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:37.485 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:37.485 [230/268] Linking target lib/librte_eal.so.24.1 00:06:37.485 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:37.743 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:37.743 [233/268] Linking target lib/librte_pci.so.24.1 00:06:37.743 [234/268] Linking target lib/librte_ring.so.24.1 00:06:37.743 [235/268] Linking target lib/librte_meter.so.24.1 00:06:37.743 [236/268] Linking target lib/librte_timer.so.24.1 00:06:37.743 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:37.743 [238/268] Linking target lib/librte_dmadev.so.24.1 00:06:38.000 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:38.000 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:38.000 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:38.000 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:38.000 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:38.000 [244/268] Linking target lib/librte_rcu.so.24.1 00:06:38.000 [245/268] Linking target lib/librte_mempool.so.24.1 00:06:38.000 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:38.000 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:38.000 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:38.000 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:38.000 [250/268] Linking target lib/librte_mbuf.so.24.1 
00:06:38.257 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:38.257 [252/268] Linking target lib/librte_reorder.so.24.1 00:06:38.257 [253/268] Linking target lib/librte_compressdev.so.24.1 00:06:38.257 [254/268] Linking target lib/librte_net.so.24.1 00:06:38.257 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:06:38.257 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:38.257 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:38.514 [258/268] Linking target lib/librte_security.so.24.1 00:06:38.514 [259/268] Linking target lib/librte_cmdline.so.24.1 00:06:38.514 [260/268] Linking target lib/librte_hash.so.24.1 00:06:38.514 [261/268] Linking target lib/librte_ethdev.so.24.1 00:06:38.514 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:38.514 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:38.514 [264/268] Linking target lib/librte_power.so.24.1 00:06:42.694 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:42.694 [266/268] Linking static target lib/librte_vhost.a 00:06:42.951 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.208 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:43.208 INFO: autodetecting backend as ninja 00:06:43.208 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:07:05.125 CC lib/ut_mock/mock.o 00:07:05.125 CC lib/ut/ut.o 00:07:05.125 CC lib/log/log.o 00:07:05.125 CC lib/log/log_flags.o 00:07:05.125 CC lib/log/log_deprecated.o 00:07:05.125 LIB libspdk_ut.a 00:07:05.125 LIB libspdk_ut_mock.a 00:07:05.125 LIB libspdk_log.a 00:07:05.125 SO libspdk_ut.so.2.0 00:07:05.125 SO libspdk_ut_mock.so.6.0 00:07:05.125 SO libspdk_log.so.7.1 
00:07:05.125 SYMLINK libspdk_ut_mock.so 00:07:05.125 SYMLINK libspdk_ut.so 00:07:05.125 SYMLINK libspdk_log.so 00:07:05.125 CC lib/dma/dma.o 00:07:05.125 CXX lib/trace_parser/trace.o 00:07:05.125 CC lib/ioat/ioat.o 00:07:05.125 CC lib/util/base64.o 00:07:05.125 CC lib/util/bit_array.o 00:07:05.125 CC lib/util/cpuset.o 00:07:05.125 CC lib/util/crc16.o 00:07:05.125 CC lib/util/crc32.o 00:07:05.125 CC lib/util/crc32c.o 00:07:05.125 CC lib/util/crc32_ieee.o 00:07:05.125 CC lib/util/crc64.o 00:07:05.125 CC lib/util/dif.o 00:07:05.125 CC lib/util/fd.o 00:07:05.125 CC lib/util/fd_group.o 00:07:05.125 CC lib/util/file.o 00:07:05.125 CC lib/util/hexlify.o 00:07:05.125 CC lib/util/iov.o 00:07:05.125 CC lib/util/math.o 00:07:05.125 CC lib/util/net.o 00:07:05.125 CC lib/util/pipe.o 00:07:05.125 CC lib/util/strerror_tls.o 00:07:05.125 CC lib/util/string.o 00:07:05.125 CC lib/util/uuid.o 00:07:05.125 CC lib/util/xor.o 00:07:05.125 CC lib/util/md5.o 00:07:05.125 CC lib/util/zipf.o 00:07:05.125 CC lib/vfio_user/host/vfio_user_pci.o 00:07:05.125 CC lib/vfio_user/host/vfio_user.o 00:07:05.125 LIB libspdk_dma.a 00:07:05.125 SO libspdk_dma.so.5.0 00:07:05.125 SYMLINK libspdk_dma.so 00:07:05.125 LIB libspdk_ioat.a 00:07:05.125 SO libspdk_ioat.so.7.0 00:07:05.125 SYMLINK libspdk_ioat.so 00:07:05.125 LIB libspdk_vfio_user.a 00:07:05.125 SO libspdk_vfio_user.so.5.0 00:07:05.125 SYMLINK libspdk_vfio_user.so 00:07:05.125 LIB libspdk_util.a 00:07:05.125 SO libspdk_util.so.10.1 00:07:05.125 SYMLINK libspdk_util.so 00:07:05.125 CC lib/json/json_parse.o 00:07:05.125 CC lib/rdma_utils/rdma_utils.o 00:07:05.125 CC lib/vmd/vmd.o 00:07:05.125 CC lib/json/json_util.o 00:07:05.125 CC lib/idxd/idxd.o 00:07:05.125 CC lib/conf/conf.o 00:07:05.125 CC lib/env_dpdk/env.o 00:07:05.125 CC lib/vmd/led.o 00:07:05.125 CC lib/json/json_write.o 00:07:05.125 CC lib/idxd/idxd_user.o 00:07:05.125 CC lib/env_dpdk/memory.o 00:07:05.125 CC lib/idxd/idxd_kernel.o 00:07:05.125 CC lib/env_dpdk/pci.o 00:07:05.125 CC 
lib/env_dpdk/init.o 00:07:05.125 CC lib/env_dpdk/threads.o 00:07:05.125 CC lib/env_dpdk/pci_ioat.o 00:07:05.125 CC lib/env_dpdk/pci_virtio.o 00:07:05.125 CC lib/env_dpdk/pci_vmd.o 00:07:05.125 CC lib/env_dpdk/pci_idxd.o 00:07:05.125 CC lib/env_dpdk/pci_event.o 00:07:05.125 CC lib/env_dpdk/sigbus_handler.o 00:07:05.125 CC lib/env_dpdk/pci_dpdk.o 00:07:05.125 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:05.125 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:05.125 LIB libspdk_trace_parser.a 00:07:05.125 SO libspdk_trace_parser.so.6.0 00:07:05.125 SYMLINK libspdk_trace_parser.so 00:07:05.125 LIB libspdk_json.a 00:07:05.125 LIB libspdk_rdma_utils.a 00:07:05.125 LIB libspdk_conf.a 00:07:05.125 SO libspdk_rdma_utils.so.1.0 00:07:05.125 SO libspdk_json.so.6.0 00:07:05.125 SO libspdk_conf.so.6.0 00:07:05.125 SYMLINK libspdk_rdma_utils.so 00:07:05.125 SYMLINK libspdk_json.so 00:07:05.125 SYMLINK libspdk_conf.so 00:07:05.125 CC lib/rdma_provider/common.o 00:07:05.125 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:05.125 CC lib/jsonrpc/jsonrpc_server.o 00:07:05.125 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:05.125 CC lib/jsonrpc/jsonrpc_client.o 00:07:05.125 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:05.125 LIB libspdk_idxd.a 00:07:05.125 SO libspdk_idxd.so.12.1 00:07:05.125 SYMLINK libspdk_idxd.so 00:07:05.125 LIB libspdk_vmd.a 00:07:05.125 SO libspdk_vmd.so.6.0 00:07:05.125 LIB libspdk_rdma_provider.a 00:07:05.125 SYMLINK libspdk_vmd.so 00:07:05.125 SO libspdk_rdma_provider.so.7.0 00:07:05.125 LIB libspdk_jsonrpc.a 00:07:05.125 SO libspdk_jsonrpc.so.6.0 00:07:05.125 SYMLINK libspdk_rdma_provider.so 00:07:05.125 SYMLINK libspdk_jsonrpc.so 00:07:05.125 CC lib/rpc/rpc.o 00:07:05.125 LIB libspdk_rpc.a 00:07:05.125 SO libspdk_rpc.so.6.0 00:07:05.125 SYMLINK libspdk_rpc.so 00:07:05.383 CC lib/keyring/keyring.o 00:07:05.383 CC lib/trace/trace.o 00:07:05.383 CC lib/notify/notify.o 00:07:05.383 CC lib/keyring/keyring_rpc.o 00:07:05.383 CC lib/trace/trace_flags.o 00:07:05.383 CC 
lib/notify/notify_rpc.o 00:07:05.383 CC lib/trace/trace_rpc.o 00:07:05.642 LIB libspdk_notify.a 00:07:05.642 SO libspdk_notify.so.6.0 00:07:05.642 SYMLINK libspdk_notify.so 00:07:05.642 LIB libspdk_keyring.a 00:07:05.642 LIB libspdk_trace.a 00:07:05.642 SO libspdk_keyring.so.2.0 00:07:05.642 SO libspdk_trace.so.11.0 00:07:05.642 SYMLINK libspdk_keyring.so 00:07:05.642 SYMLINK libspdk_trace.so 00:07:05.921 CC lib/sock/sock.o 00:07:05.921 CC lib/sock/sock_rpc.o 00:07:05.921 CC lib/thread/thread.o 00:07:05.921 CC lib/thread/iobuf.o 00:07:05.921 LIB libspdk_env_dpdk.a 00:07:05.921 SO libspdk_env_dpdk.so.15.1 00:07:06.198 SYMLINK libspdk_env_dpdk.so 00:07:06.198 LIB libspdk_sock.a 00:07:06.456 SO libspdk_sock.so.10.0 00:07:06.456 SYMLINK libspdk_sock.so 00:07:06.456 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:06.456 CC lib/nvme/nvme_ctrlr.o 00:07:06.456 CC lib/nvme/nvme_fabric.o 00:07:06.456 CC lib/nvme/nvme_ns_cmd.o 00:07:06.456 CC lib/nvme/nvme_ns.o 00:07:06.456 CC lib/nvme/nvme_pcie_common.o 00:07:06.456 CC lib/nvme/nvme_pcie.o 00:07:06.456 CC lib/nvme/nvme_qpair.o 00:07:06.456 CC lib/nvme/nvme.o 00:07:06.456 CC lib/nvme/nvme_quirks.o 00:07:06.456 CC lib/nvme/nvme_transport.o 00:07:06.456 CC lib/nvme/nvme_discovery.o 00:07:06.456 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:06.456 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:06.456 CC lib/nvme/nvme_tcp.o 00:07:06.456 CC lib/nvme/nvme_opal.o 00:07:06.456 CC lib/nvme/nvme_io_msg.o 00:07:06.456 CC lib/nvme/nvme_poll_group.o 00:07:06.456 CC lib/nvme/nvme_zns.o 00:07:06.456 CC lib/nvme/nvme_stubs.o 00:07:06.456 CC lib/nvme/nvme_auth.o 00:07:06.456 CC lib/nvme/nvme_cuse.o 00:07:06.456 CC lib/nvme/nvme_vfio_user.o 00:07:06.456 CC lib/nvme/nvme_rdma.o 00:07:07.832 LIB libspdk_thread.a 00:07:07.832 SO libspdk_thread.so.11.0 00:07:07.832 SYMLINK libspdk_thread.so 00:07:07.832 CC lib/accel/accel.o 00:07:07.832 CC lib/blob/blobstore.o 00:07:07.832 CC lib/vfu_tgt/tgt_endpoint.o 00:07:07.832 CC lib/init/json_config.o 00:07:07.832 CC 
lib/accel/accel_rpc.o 00:07:07.832 CC lib/fsdev/fsdev.o 00:07:07.832 CC lib/blob/request.o 00:07:07.832 CC lib/virtio/virtio.o 00:07:07.832 CC lib/init/subsystem.o 00:07:07.832 CC lib/accel/accel_sw.o 00:07:07.832 CC lib/virtio/virtio_vhost_user.o 00:07:07.832 CC lib/blob/zeroes.o 00:07:07.832 CC lib/vfu_tgt/tgt_rpc.o 00:07:07.832 CC lib/init/subsystem_rpc.o 00:07:07.832 CC lib/blob/blob_bs_dev.o 00:07:07.832 CC lib/fsdev/fsdev_io.o 00:07:07.832 CC lib/virtio/virtio_vfio_user.o 00:07:07.832 CC lib/fsdev/fsdev_rpc.o 00:07:07.832 CC lib/init/rpc.o 00:07:07.832 CC lib/virtio/virtio_pci.o 00:07:08.090 LIB libspdk_init.a 00:07:08.090 SO libspdk_init.so.6.0 00:07:08.090 LIB libspdk_virtio.a 00:07:08.090 SYMLINK libspdk_init.so 00:07:08.090 SO libspdk_virtio.so.7.0 00:07:08.090 LIB libspdk_vfu_tgt.a 00:07:08.348 SYMLINK libspdk_virtio.so 00:07:08.348 SO libspdk_vfu_tgt.so.3.0 00:07:08.348 SYMLINK libspdk_vfu_tgt.so 00:07:08.348 CC lib/event/app.o 00:07:08.348 CC lib/event/reactor.o 00:07:08.348 CC lib/event/log_rpc.o 00:07:08.348 CC lib/event/app_rpc.o 00:07:08.348 CC lib/event/scheduler_static.o 00:07:08.606 LIB libspdk_fsdev.a 00:07:08.606 SO libspdk_fsdev.so.2.0 00:07:08.606 SYMLINK libspdk_fsdev.so 00:07:08.863 LIB libspdk_event.a 00:07:08.863 SO libspdk_event.so.14.0 00:07:08.863 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:08.863 SYMLINK libspdk_event.so 00:07:09.120 LIB libspdk_accel.a 00:07:09.120 SO libspdk_accel.so.16.0 00:07:09.120 LIB libspdk_nvme.a 00:07:09.120 SYMLINK libspdk_accel.so 00:07:09.120 SO libspdk_nvme.so.15.0 00:07:09.378 CC lib/bdev/bdev.o 00:07:09.378 CC lib/bdev/bdev_rpc.o 00:07:09.378 CC lib/bdev/bdev_zone.o 00:07:09.378 CC lib/bdev/part.o 00:07:09.378 CC lib/bdev/scsi_nvme.o 00:07:09.378 SYMLINK libspdk_nvme.so 00:07:09.378 LIB libspdk_fuse_dispatcher.a 00:07:09.636 SO libspdk_fuse_dispatcher.so.1.0 00:07:09.636 SYMLINK libspdk_fuse_dispatcher.so 00:07:11.009 LIB libspdk_blob.a 00:07:11.009 SO libspdk_blob.so.12.0 00:07:11.009 SYMLINK 
libspdk_blob.so 00:07:11.266 CC lib/blobfs/blobfs.o 00:07:11.266 CC lib/blobfs/tree.o 00:07:11.266 CC lib/lvol/lvol.o 00:07:12.204 LIB libspdk_bdev.a 00:07:12.204 LIB libspdk_blobfs.a 00:07:12.204 SO libspdk_bdev.so.17.0 00:07:12.204 SO libspdk_blobfs.so.11.0 00:07:12.204 SYMLINK libspdk_bdev.so 00:07:12.204 SYMLINK libspdk_blobfs.so 00:07:12.204 LIB libspdk_lvol.a 00:07:12.204 SO libspdk_lvol.so.11.0 00:07:12.204 SYMLINK libspdk_lvol.so 00:07:12.204 CC lib/ublk/ublk.o 00:07:12.204 CC lib/nvmf/ctrlr.o 00:07:12.204 CC lib/ublk/ublk_rpc.o 00:07:12.204 CC lib/scsi/dev.o 00:07:12.204 CC lib/ftl/ftl_core.o 00:07:12.204 CC lib/nbd/nbd.o 00:07:12.204 CC lib/nvmf/ctrlr_discovery.o 00:07:12.204 CC lib/scsi/lun.o 00:07:12.204 CC lib/ftl/ftl_init.o 00:07:12.204 CC lib/nbd/nbd_rpc.o 00:07:12.204 CC lib/nvmf/ctrlr_bdev.o 00:07:12.204 CC lib/scsi/port.o 00:07:12.204 CC lib/ftl/ftl_layout.o 00:07:12.204 CC lib/nvmf/subsystem.o 00:07:12.204 CC lib/scsi/scsi.o 00:07:12.204 CC lib/ftl/ftl_debug.o 00:07:12.204 CC lib/nvmf/nvmf.o 00:07:12.204 CC lib/scsi/scsi_bdev.o 00:07:12.204 CC lib/ftl/ftl_io.o 00:07:12.204 CC lib/nvmf/nvmf_rpc.o 00:07:12.204 CC lib/scsi/scsi_pr.o 00:07:12.204 CC lib/ftl/ftl_sb.o 00:07:12.204 CC lib/nvmf/tcp.o 00:07:12.204 CC lib/nvmf/transport.o 00:07:12.204 CC lib/scsi/scsi_rpc.o 00:07:12.204 CC lib/ftl/ftl_l2p.o 00:07:12.204 CC lib/nvmf/stubs.o 00:07:12.204 CC lib/scsi/task.o 00:07:12.204 CC lib/ftl/ftl_l2p_flat.o 00:07:12.204 CC lib/nvmf/mdns_server.o 00:07:12.204 CC lib/ftl/ftl_nv_cache.o 00:07:12.204 CC lib/nvmf/vfio_user.o 00:07:12.204 CC lib/ftl/ftl_band.o 00:07:12.204 CC lib/nvmf/rdma.o 00:07:12.204 CC lib/ftl/ftl_band_ops.o 00:07:12.204 CC lib/nvmf/auth.o 00:07:12.204 CC lib/ftl/ftl_writer.o 00:07:12.204 CC lib/ftl/ftl_reloc.o 00:07:12.204 CC lib/ftl/ftl_rq.o 00:07:12.204 CC lib/ftl/ftl_l2p_cache.o 00:07:12.204 CC lib/ftl/ftl_p2l.o 00:07:12.204 CC lib/ftl/ftl_p2l_log.o 00:07:12.204 CC lib/ftl/mngt/ftl_mngt.o 00:07:12.204 CC lib/ftl/mngt/ftl_mngt_bdev.o 
00:07:12.204 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:12.204 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:12.204 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:12.204 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:12.780 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:12.780 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:12.780 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:12.780 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:12.780 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:12.780 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:12.780 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:12.780 CC lib/ftl/utils/ftl_conf.o 00:07:12.780 CC lib/ftl/utils/ftl_md.o 00:07:12.780 CC lib/ftl/utils/ftl_mempool.o 00:07:12.780 CC lib/ftl/utils/ftl_bitmap.o 00:07:12.780 CC lib/ftl/utils/ftl_property.o 00:07:12.780 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:12.780 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:12.780 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:12.780 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:12.780 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:12.780 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:12.780 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:13.039 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:13.039 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:13.039 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:13.039 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:13.039 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:13.039 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:13.039 CC lib/ftl/base/ftl_base_dev.o 00:07:13.039 CC lib/ftl/base/ftl_base_bdev.o 00:07:13.039 CC lib/ftl/ftl_trace.o 00:07:13.039 LIB libspdk_nbd.a 00:07:13.039 SO libspdk_nbd.so.7.0 00:07:13.298 SYMLINK libspdk_nbd.so 00:07:13.298 LIB libspdk_scsi.a 00:07:13.298 SO libspdk_scsi.so.9.0 00:07:13.298 LIB libspdk_ublk.a 00:07:13.298 SYMLINK libspdk_scsi.so 00:07:13.298 SO libspdk_ublk.so.3.0 00:07:13.298 SYMLINK libspdk_ublk.so 00:07:13.557 CC lib/vhost/vhost.o 00:07:13.557 CC lib/iscsi/conn.o 00:07:13.557 CC lib/iscsi/init_grp.o 00:07:13.557 CC lib/vhost/vhost_rpc.o 00:07:13.557 CC lib/iscsi/iscsi.o 00:07:13.557 CC 
lib/vhost/vhost_scsi.o 00:07:13.557 CC lib/iscsi/param.o 00:07:13.557 CC lib/vhost/vhost_blk.o 00:07:13.557 CC lib/iscsi/portal_grp.o 00:07:13.557 CC lib/vhost/rte_vhost_user.o 00:07:13.557 CC lib/iscsi/tgt_node.o 00:07:13.557 CC lib/iscsi/iscsi_subsystem.o 00:07:13.557 CC lib/iscsi/iscsi_rpc.o 00:07:13.557 CC lib/iscsi/task.o 00:07:13.814 LIB libspdk_ftl.a 00:07:13.814 SO libspdk_ftl.so.9.0 00:07:14.070 SYMLINK libspdk_ftl.so 00:07:14.635 LIB libspdk_vhost.a 00:07:14.920 SO libspdk_vhost.so.8.0 00:07:14.920 SYMLINK libspdk_vhost.so 00:07:14.920 LIB libspdk_nvmf.a 00:07:14.920 SO libspdk_nvmf.so.20.0 00:07:14.920 LIB libspdk_iscsi.a 00:07:15.177 SO libspdk_iscsi.so.8.0 00:07:15.177 SYMLINK libspdk_nvmf.so 00:07:15.177 SYMLINK libspdk_iscsi.so 00:07:15.434 CC module/vfu_device/vfu_virtio.o 00:07:15.434 CC module/vfu_device/vfu_virtio_blk.o 00:07:15.434 CC module/env_dpdk/env_dpdk_rpc.o 00:07:15.434 CC module/vfu_device/vfu_virtio_scsi.o 00:07:15.434 CC module/vfu_device/vfu_virtio_rpc.o 00:07:15.434 CC module/vfu_device/vfu_virtio_fs.o 00:07:15.434 CC module/sock/posix/posix.o 00:07:15.434 CC module/keyring/linux/keyring.o 00:07:15.434 CC module/keyring/linux/keyring_rpc.o 00:07:15.434 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:15.434 CC module/accel/dsa/accel_dsa.o 00:07:15.434 CC module/accel/dsa/accel_dsa_rpc.o 00:07:15.434 CC module/blob/bdev/blob_bdev.o 00:07:15.434 CC module/fsdev/aio/fsdev_aio.o 00:07:15.434 CC module/accel/ioat/accel_ioat.o 00:07:15.434 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:15.434 CC module/accel/iaa/accel_iaa.o 00:07:15.434 CC module/scheduler/gscheduler/gscheduler.o 00:07:15.434 CC module/accel/ioat/accel_ioat_rpc.o 00:07:15.434 CC module/fsdev/aio/linux_aio_mgr.o 00:07:15.434 CC module/accel/iaa/accel_iaa_rpc.o 00:07:15.434 CC module/keyring/file/keyring.o 00:07:15.434 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:15.434 CC module/accel/error/accel_error.o 00:07:15.434 CC module/keyring/file/keyring_rpc.o 
00:07:15.434 CC module/accel/error/accel_error_rpc.o 00:07:15.691 LIB libspdk_env_dpdk_rpc.a 00:07:15.691 SO libspdk_env_dpdk_rpc.so.6.0 00:07:15.691 SYMLINK libspdk_env_dpdk_rpc.so 00:07:15.691 LIB libspdk_scheduler_gscheduler.a 00:07:15.691 LIB libspdk_scheduler_dpdk_governor.a 00:07:15.691 SO libspdk_scheduler_gscheduler.so.4.0 00:07:15.691 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:15.691 LIB libspdk_accel_ioat.a 00:07:15.691 LIB libspdk_scheduler_dynamic.a 00:07:15.691 SO libspdk_accel_ioat.so.6.0 00:07:15.691 LIB libspdk_accel_error.a 00:07:15.691 LIB libspdk_keyring_file.a 00:07:15.691 LIB libspdk_keyring_linux.a 00:07:15.691 SYMLINK libspdk_scheduler_gscheduler.so 00:07:15.691 SO libspdk_scheduler_dynamic.so.4.0 00:07:15.691 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:15.691 SO libspdk_accel_error.so.2.0 00:07:15.691 SO libspdk_keyring_file.so.2.0 00:07:15.691 SO libspdk_keyring_linux.so.1.0 00:07:15.949 SYMLINK libspdk_accel_ioat.so 00:07:15.949 SYMLINK libspdk_scheduler_dynamic.so 00:07:15.949 LIB libspdk_blob_bdev.a 00:07:15.949 SYMLINK libspdk_accel_error.so 00:07:15.949 SYMLINK libspdk_keyring_file.so 00:07:15.949 LIB libspdk_accel_dsa.a 00:07:15.949 SYMLINK libspdk_keyring_linux.so 00:07:15.949 LIB libspdk_accel_iaa.a 00:07:15.949 SO libspdk_blob_bdev.so.12.0 00:07:15.949 SO libspdk_accel_iaa.so.3.0 00:07:15.949 SO libspdk_accel_dsa.so.5.0 00:07:15.949 SYMLINK libspdk_blob_bdev.so 00:07:15.949 SYMLINK libspdk_accel_iaa.so 00:07:15.949 SYMLINK libspdk_accel_dsa.so 00:07:16.209 LIB libspdk_vfu_device.a 00:07:16.209 SO libspdk_vfu_device.so.3.0 00:07:16.209 CC module/bdev/lvol/vbdev_lvol.o 00:07:16.209 CC module/bdev/error/vbdev_error.o 00:07:16.209 CC module/bdev/gpt/gpt.o 00:07:16.209 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:16.209 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:16.209 CC module/bdev/null/bdev_null.o 00:07:16.209 CC module/bdev/error/vbdev_error_rpc.o 00:07:16.209 CC module/bdev/null/bdev_null_rpc.o 00:07:16.209 CC 
module/bdev/gpt/vbdev_gpt.o 00:07:16.209 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:16.209 CC module/blobfs/bdev/blobfs_bdev.o 00:07:16.209 CC module/bdev/malloc/bdev_malloc.o 00:07:16.209 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:16.209 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:16.209 CC module/bdev/nvme/bdev_nvme.o 00:07:16.209 CC module/bdev/split/vbdev_split.o 00:07:16.209 CC module/bdev/delay/vbdev_delay.o 00:07:16.209 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:16.209 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:16.209 CC module/bdev/nvme/nvme_rpc.o 00:07:16.209 CC module/bdev/split/vbdev_split_rpc.o 00:07:16.209 CC module/bdev/nvme/bdev_mdns_client.o 00:07:16.209 CC module/bdev/ftl/bdev_ftl.o 00:07:16.209 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:16.209 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:16.209 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:16.209 CC module/bdev/nvme/vbdev_opal.o 00:07:16.209 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:16.209 CC module/bdev/aio/bdev_aio.o 00:07:16.209 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:16.209 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:16.209 CC module/bdev/aio/bdev_aio_rpc.o 00:07:16.209 CC module/bdev/iscsi/bdev_iscsi.o 00:07:16.209 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:16.209 CC module/bdev/passthru/vbdev_passthru.o 00:07:16.209 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:16.209 CC module/bdev/raid/bdev_raid.o 00:07:16.209 CC module/bdev/raid/bdev_raid_rpc.o 00:07:16.209 CC module/bdev/raid/bdev_raid_sb.o 00:07:16.209 CC module/bdev/raid/raid0.o 00:07:16.209 CC module/bdev/raid/raid1.o 00:07:16.209 CC module/bdev/raid/concat.o 00:07:16.209 SYMLINK libspdk_vfu_device.so 00:07:16.468 LIB libspdk_fsdev_aio.a 00:07:16.468 SO libspdk_fsdev_aio.so.1.0 00:07:16.468 LIB libspdk_sock_posix.a 00:07:16.468 SO libspdk_sock_posix.so.6.0 00:07:16.468 SYMLINK libspdk_fsdev_aio.so 00:07:16.468 LIB libspdk_blobfs_bdev.a 00:07:16.726 SO libspdk_blobfs_bdev.so.6.0 00:07:16.726 SYMLINK 
libspdk_sock_posix.so 00:07:16.726 LIB libspdk_bdev_split.a 00:07:16.726 SYMLINK libspdk_blobfs_bdev.so 00:07:16.726 LIB libspdk_bdev_malloc.a 00:07:16.726 LIB libspdk_bdev_error.a 00:07:16.726 SO libspdk_bdev_split.so.6.0 00:07:16.726 LIB libspdk_bdev_ftl.a 00:07:16.726 LIB libspdk_bdev_null.a 00:07:16.726 SO libspdk_bdev_malloc.so.6.0 00:07:16.726 SO libspdk_bdev_null.so.6.0 00:07:16.726 SO libspdk_bdev_error.so.6.0 00:07:16.726 SO libspdk_bdev_ftl.so.6.0 00:07:16.726 LIB libspdk_bdev_gpt.a 00:07:16.726 LIB libspdk_bdev_passthru.a 00:07:16.726 SYMLINK libspdk_bdev_split.so 00:07:16.726 SYMLINK libspdk_bdev_malloc.so 00:07:16.726 SO libspdk_bdev_gpt.so.6.0 00:07:16.726 SO libspdk_bdev_passthru.so.6.0 00:07:16.726 SYMLINK libspdk_bdev_null.so 00:07:16.726 SYMLINK libspdk_bdev_error.so 00:07:16.726 SYMLINK libspdk_bdev_ftl.so 00:07:16.726 LIB libspdk_bdev_zone_block.a 00:07:16.726 LIB libspdk_bdev_iscsi.a 00:07:16.726 LIB libspdk_bdev_aio.a 00:07:16.726 SO libspdk_bdev_zone_block.so.6.0 00:07:16.726 SYMLINK libspdk_bdev_gpt.so 00:07:16.726 SYMLINK libspdk_bdev_passthru.so 00:07:16.726 LIB libspdk_bdev_delay.a 00:07:16.726 SO libspdk_bdev_iscsi.so.6.0 00:07:16.726 SO libspdk_bdev_aio.so.6.0 00:07:16.726 SO libspdk_bdev_delay.so.6.0 00:07:16.984 SYMLINK libspdk_bdev_zone_block.so 00:07:16.984 SYMLINK libspdk_bdev_iscsi.so 00:07:16.984 SYMLINK libspdk_bdev_aio.so 00:07:16.984 SYMLINK libspdk_bdev_delay.so 00:07:16.984 LIB libspdk_bdev_virtio.a 00:07:16.984 LIB libspdk_bdev_lvol.a 00:07:16.984 SO libspdk_bdev_virtio.so.6.0 00:07:16.984 SO libspdk_bdev_lvol.so.6.0 00:07:16.984 SYMLINK libspdk_bdev_virtio.so 00:07:16.984 SYMLINK libspdk_bdev_lvol.so 00:07:17.549 LIB libspdk_bdev_raid.a 00:07:17.549 SO libspdk_bdev_raid.so.6.0 00:07:17.549 SYMLINK libspdk_bdev_raid.so 00:07:18.925 LIB libspdk_bdev_nvme.a 00:07:18.925 SO libspdk_bdev_nvme.so.7.1 00:07:19.183 SYMLINK libspdk_bdev_nvme.so 00:07:19.441 CC module/event/subsystems/sock/sock.o 00:07:19.441 CC 
module/event/subsystems/vmd/vmd.o 00:07:19.441 CC module/event/subsystems/scheduler/scheduler.o 00:07:19.441 CC module/event/subsystems/iobuf/iobuf.o 00:07:19.441 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:19.442 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:19.442 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:19.442 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:19.442 CC module/event/subsystems/fsdev/fsdev.o 00:07:19.442 CC module/event/subsystems/keyring/keyring.o 00:07:19.701 LIB libspdk_event_keyring.a 00:07:19.701 LIB libspdk_event_vhost_blk.a 00:07:19.701 LIB libspdk_event_fsdev.a 00:07:19.701 LIB libspdk_event_scheduler.a 00:07:19.701 LIB libspdk_event_vfu_tgt.a 00:07:19.701 LIB libspdk_event_vmd.a 00:07:19.701 LIB libspdk_event_sock.a 00:07:19.701 SO libspdk_event_keyring.so.1.0 00:07:19.701 SO libspdk_event_vhost_blk.so.3.0 00:07:19.701 LIB libspdk_event_iobuf.a 00:07:19.701 SO libspdk_event_vfu_tgt.so.3.0 00:07:19.701 SO libspdk_event_scheduler.so.4.0 00:07:19.701 SO libspdk_event_fsdev.so.1.0 00:07:19.701 SO libspdk_event_sock.so.5.0 00:07:19.701 SO libspdk_event_vmd.so.6.0 00:07:19.701 SO libspdk_event_iobuf.so.3.0 00:07:19.701 SYMLINK libspdk_event_keyring.so 00:07:19.701 SYMLINK libspdk_event_vhost_blk.so 00:07:19.701 SYMLINK libspdk_event_fsdev.so 00:07:19.701 SYMLINK libspdk_event_vfu_tgt.so 00:07:19.701 SYMLINK libspdk_event_scheduler.so 00:07:19.701 SYMLINK libspdk_event_sock.so 00:07:19.701 SYMLINK libspdk_event_vmd.so 00:07:19.701 SYMLINK libspdk_event_iobuf.so 00:07:19.960 CC module/event/subsystems/accel/accel.o 00:07:19.960 LIB libspdk_event_accel.a 00:07:19.960 SO libspdk_event_accel.so.6.0 00:07:20.218 SYMLINK libspdk_event_accel.so 00:07:20.218 CC module/event/subsystems/bdev/bdev.o 00:07:20.477 LIB libspdk_event_bdev.a 00:07:20.477 SO libspdk_event_bdev.so.6.0 00:07:20.477 SYMLINK libspdk_event_bdev.so 00:07:20.736 CC module/event/subsystems/scsi/scsi.o 00:07:20.736 CC module/event/subsystems/nbd/nbd.o 00:07:20.736 
CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:20.736 CC module/event/subsystems/ublk/ublk.o 00:07:20.736 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:20.736 LIB libspdk_event_nbd.a 00:07:20.736 LIB libspdk_event_ublk.a 00:07:20.736 LIB libspdk_event_scsi.a 00:07:20.736 SO libspdk_event_nbd.so.6.0 00:07:20.736 SO libspdk_event_ublk.so.3.0 00:07:20.994 SO libspdk_event_scsi.so.6.0 00:07:20.994 SYMLINK libspdk_event_ublk.so 00:07:20.994 SYMLINK libspdk_event_nbd.so 00:07:20.994 SYMLINK libspdk_event_scsi.so 00:07:20.994 LIB libspdk_event_nvmf.a 00:07:20.994 SO libspdk_event_nvmf.so.6.0 00:07:20.994 SYMLINK libspdk_event_nvmf.so 00:07:20.994 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:20.994 CC module/event/subsystems/iscsi/iscsi.o 00:07:21.254 LIB libspdk_event_vhost_scsi.a 00:07:21.254 SO libspdk_event_vhost_scsi.so.3.0 00:07:21.254 LIB libspdk_event_iscsi.a 00:07:21.254 SO libspdk_event_iscsi.so.6.0 00:07:21.254 SYMLINK libspdk_event_vhost_scsi.so 00:07:21.254 SYMLINK libspdk_event_iscsi.so 00:07:21.512 SO libspdk.so.6.0 00:07:21.512 SYMLINK libspdk.so 00:07:21.774 TEST_HEADER include/spdk/accel.h 00:07:21.774 TEST_HEADER include/spdk/accel_module.h 00:07:21.774 TEST_HEADER include/spdk/assert.h 00:07:21.774 TEST_HEADER include/spdk/barrier.h 00:07:21.774 TEST_HEADER include/spdk/bdev.h 00:07:21.774 CC app/spdk_nvme_discover/discovery_aer.o 00:07:21.774 TEST_HEADER include/spdk/base64.h 00:07:21.774 CC app/spdk_lspci/spdk_lspci.o 00:07:21.774 TEST_HEADER include/spdk/bdev_module.h 00:07:21.774 CXX app/trace/trace.o 00:07:21.774 TEST_HEADER include/spdk/bdev_zone.h 00:07:21.774 CC app/spdk_nvme_perf/perf.o 00:07:21.774 TEST_HEADER include/spdk/bit_array.h 00:07:21.774 CC app/spdk_top/spdk_top.o 00:07:21.774 TEST_HEADER include/spdk/bit_pool.h 00:07:21.774 CC test/rpc_client/rpc_client_test.o 00:07:21.774 CC app/spdk_nvme_identify/identify.o 00:07:21.774 TEST_HEADER include/spdk/blob_bdev.h 00:07:21.774 TEST_HEADER include/spdk/blobfs_bdev.h 
00:07:21.774 TEST_HEADER include/spdk/blobfs.h 00:07:21.774 TEST_HEADER include/spdk/blob.h 00:07:21.774 CC app/trace_record/trace_record.o 00:07:21.774 TEST_HEADER include/spdk/conf.h 00:07:21.774 TEST_HEADER include/spdk/config.h 00:07:21.774 TEST_HEADER include/spdk/cpuset.h 00:07:21.774 TEST_HEADER include/spdk/crc16.h 00:07:21.774 TEST_HEADER include/spdk/crc32.h 00:07:21.774 TEST_HEADER include/spdk/crc64.h 00:07:21.774 TEST_HEADER include/spdk/dif.h 00:07:21.774 TEST_HEADER include/spdk/dma.h 00:07:21.774 TEST_HEADER include/spdk/endian.h 00:07:21.774 TEST_HEADER include/spdk/env_dpdk.h 00:07:21.774 TEST_HEADER include/spdk/env.h 00:07:21.774 TEST_HEADER include/spdk/event.h 00:07:21.774 TEST_HEADER include/spdk/fd_group.h 00:07:21.774 TEST_HEADER include/spdk/fd.h 00:07:21.774 TEST_HEADER include/spdk/file.h 00:07:21.774 TEST_HEADER include/spdk/fsdev.h 00:07:21.774 TEST_HEADER include/spdk/fsdev_module.h 00:07:21.774 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:21.774 TEST_HEADER include/spdk/ftl.h 00:07:21.774 TEST_HEADER include/spdk/gpt_spec.h 00:07:21.774 TEST_HEADER include/spdk/hexlify.h 00:07:21.774 TEST_HEADER include/spdk/histogram_data.h 00:07:21.774 TEST_HEADER include/spdk/idxd.h 00:07:21.774 TEST_HEADER include/spdk/idxd_spec.h 00:07:21.774 TEST_HEADER include/spdk/init.h 00:07:21.774 TEST_HEADER include/spdk/ioat_spec.h 00:07:21.774 TEST_HEADER include/spdk/ioat.h 00:07:21.774 TEST_HEADER include/spdk/iscsi_spec.h 00:07:21.774 TEST_HEADER include/spdk/json.h 00:07:21.774 TEST_HEADER include/spdk/jsonrpc.h 00:07:21.774 TEST_HEADER include/spdk/keyring.h 00:07:21.774 TEST_HEADER include/spdk/keyring_module.h 00:07:21.774 TEST_HEADER include/spdk/likely.h 00:07:21.774 TEST_HEADER include/spdk/log.h 00:07:21.774 TEST_HEADER include/spdk/md5.h 00:07:21.774 TEST_HEADER include/spdk/memory.h 00:07:21.774 TEST_HEADER include/spdk/lvol.h 00:07:21.774 TEST_HEADER include/spdk/mmio.h 00:07:21.774 TEST_HEADER include/spdk/net.h 00:07:21.774 
TEST_HEADER include/spdk/nbd.h 00:07:21.774 TEST_HEADER include/spdk/nvme.h 00:07:21.774 TEST_HEADER include/spdk/notify.h 00:07:21.774 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:21.774 TEST_HEADER include/spdk/nvme_intel.h 00:07:21.774 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:21.774 TEST_HEADER include/spdk/nvme_spec.h 00:07:21.774 TEST_HEADER include/spdk/nvme_zns.h 00:07:21.774 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:21.774 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:21.774 TEST_HEADER include/spdk/nvmf.h 00:07:21.774 TEST_HEADER include/spdk/nvmf_spec.h 00:07:21.774 TEST_HEADER include/spdk/nvmf_transport.h 00:07:21.774 TEST_HEADER include/spdk/opal.h 00:07:21.774 TEST_HEADER include/spdk/opal_spec.h 00:07:21.774 TEST_HEADER include/spdk/pci_ids.h 00:07:21.774 TEST_HEADER include/spdk/pipe.h 00:07:21.774 TEST_HEADER include/spdk/queue.h 00:07:21.774 TEST_HEADER include/spdk/reduce.h 00:07:21.774 TEST_HEADER include/spdk/rpc.h 00:07:21.774 TEST_HEADER include/spdk/scheduler.h 00:07:21.774 TEST_HEADER include/spdk/scsi.h 00:07:21.774 TEST_HEADER include/spdk/scsi_spec.h 00:07:21.774 TEST_HEADER include/spdk/sock.h 00:07:21.774 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:21.774 TEST_HEADER include/spdk/stdinc.h 00:07:21.774 TEST_HEADER include/spdk/string.h 00:07:21.774 TEST_HEADER include/spdk/trace.h 00:07:21.774 TEST_HEADER include/spdk/thread.h 00:07:21.774 TEST_HEADER include/spdk/trace_parser.h 00:07:21.774 TEST_HEADER include/spdk/tree.h 00:07:21.774 TEST_HEADER include/spdk/ublk.h 00:07:21.774 TEST_HEADER include/spdk/util.h 00:07:21.774 TEST_HEADER include/spdk/uuid.h 00:07:21.774 TEST_HEADER include/spdk/version.h 00:07:21.774 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:21.774 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:21.774 TEST_HEADER include/spdk/vhost.h 00:07:21.774 TEST_HEADER include/spdk/vmd.h 00:07:21.774 TEST_HEADER include/spdk/xor.h 00:07:21.774 TEST_HEADER include/spdk/zipf.h 00:07:21.774 CXX test/cpp_headers/accel.o 
00:07:21.774 CXX test/cpp_headers/accel_module.o 00:07:21.774 CXX test/cpp_headers/assert.o 00:07:21.774 CXX test/cpp_headers/barrier.o 00:07:21.774 CXX test/cpp_headers/base64.o 00:07:21.774 CXX test/cpp_headers/bdev.o 00:07:21.774 CXX test/cpp_headers/bdev_module.o 00:07:21.774 CXX test/cpp_headers/bdev_zone.o 00:07:21.774 CXX test/cpp_headers/bit_array.o 00:07:21.774 CXX test/cpp_headers/bit_pool.o 00:07:21.774 CXX test/cpp_headers/blob_bdev.o 00:07:21.774 CXX test/cpp_headers/blobfs_bdev.o 00:07:21.774 CXX test/cpp_headers/blobfs.o 00:07:21.774 CXX test/cpp_headers/blob.o 00:07:21.774 CXX test/cpp_headers/conf.o 00:07:21.774 CXX test/cpp_headers/config.o 00:07:21.774 CXX test/cpp_headers/cpuset.o 00:07:21.774 CXX test/cpp_headers/crc16.o 00:07:21.774 CC app/spdk_dd/spdk_dd.o 00:07:21.774 CC app/iscsi_tgt/iscsi_tgt.o 00:07:21.774 CC app/nvmf_tgt/nvmf_main.o 00:07:21.774 CXX test/cpp_headers/crc32.o 00:07:21.774 CC app/spdk_tgt/spdk_tgt.o 00:07:21.774 CC examples/ioat/verify/verify.o 00:07:21.774 CC test/thread/poller_perf/poller_perf.o 00:07:21.774 CC test/env/vtophys/vtophys.o 00:07:21.774 CC test/env/memory/memory_ut.o 00:07:21.774 CC test/app/stub/stub.o 00:07:21.774 CC examples/ioat/perf/perf.o 00:07:21.774 CC examples/util/zipf/zipf.o 00:07:21.774 CC test/app/histogram_perf/histogram_perf.o 00:07:21.774 CC test/app/jsoncat/jsoncat.o 00:07:21.774 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:21.774 CC app/fio/nvme/fio_plugin.o 00:07:21.774 CC test/env/pci/pci_ut.o 00:07:21.774 CC test/dma/test_dma/test_dma.o 00:07:21.774 CC test/app/bdev_svc/bdev_svc.o 00:07:22.036 CC app/fio/bdev/fio_plugin.o 00:07:22.036 LINK spdk_lspci 00:07:22.036 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:22.036 CC test/env/mem_callbacks/mem_callbacks.o 00:07:22.036 LINK rpc_client_test 00:07:22.036 LINK spdk_nvme_discover 00:07:22.036 LINK interrupt_tgt 00:07:22.036 LINK poller_perf 00:07:22.036 CXX test/cpp_headers/crc64.o 00:07:22.036 LINK vtophys 00:07:22.036 CXX 
test/cpp_headers/dif.o 00:07:22.036 CXX test/cpp_headers/dma.o 00:07:22.036 LINK histogram_perf 00:07:22.036 LINK jsoncat 00:07:22.301 LINK zipf 00:07:22.301 CXX test/cpp_headers/endian.o 00:07:22.301 CXX test/cpp_headers/env_dpdk.o 00:07:22.301 LINK stub 00:07:22.301 CXX test/cpp_headers/env.o 00:07:22.301 CXX test/cpp_headers/event.o 00:07:22.301 LINK nvmf_tgt 00:07:22.301 LINK env_dpdk_post_init 00:07:22.301 LINK spdk_trace_record 00:07:22.301 LINK iscsi_tgt 00:07:22.301 CXX test/cpp_headers/fd_group.o 00:07:22.301 CXX test/cpp_headers/fd.o 00:07:22.301 CXX test/cpp_headers/file.o 00:07:22.301 CXX test/cpp_headers/fsdev.o 00:07:22.301 CXX test/cpp_headers/fsdev_module.o 00:07:22.301 CXX test/cpp_headers/ftl.o 00:07:22.301 CXX test/cpp_headers/fuse_dispatcher.o 00:07:22.301 LINK ioat_perf 00:07:22.301 LINK spdk_tgt 00:07:22.301 CXX test/cpp_headers/gpt_spec.o 00:07:22.301 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:22.301 LINK verify 00:07:22.302 LINK bdev_svc 00:07:22.302 CXX test/cpp_headers/hexlify.o 00:07:22.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:22.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:22.302 CXX test/cpp_headers/histogram_data.o 00:07:22.569 CXX test/cpp_headers/idxd.o 00:07:22.569 CXX test/cpp_headers/idxd_spec.o 00:07:22.569 CXX test/cpp_headers/init.o 00:07:22.569 CXX test/cpp_headers/ioat.o 00:07:22.569 CXX test/cpp_headers/ioat_spec.o 00:07:22.569 CXX test/cpp_headers/iscsi_spec.o 00:07:22.569 LINK spdk_dd 00:07:22.569 CXX test/cpp_headers/json.o 00:07:22.569 LINK spdk_trace 00:07:22.569 CXX test/cpp_headers/jsonrpc.o 00:07:22.569 CXX test/cpp_headers/keyring.o 00:07:22.569 CXX test/cpp_headers/keyring_module.o 00:07:22.569 CXX test/cpp_headers/likely.o 00:07:22.569 CXX test/cpp_headers/log.o 00:07:22.569 CXX test/cpp_headers/lvol.o 00:07:22.569 CXX test/cpp_headers/md5.o 00:07:22.569 CXX test/cpp_headers/memory.o 00:07:22.569 LINK pci_ut 00:07:22.569 CXX test/cpp_headers/mmio.o 00:07:22.569 CXX test/cpp_headers/nbd.o 
00:07:22.569 CXX test/cpp_headers/net.o 00:07:22.569 CXX test/cpp_headers/notify.o 00:07:22.569 CXX test/cpp_headers/nvme.o 00:07:22.569 CXX test/cpp_headers/nvme_intel.o 00:07:22.569 CXX test/cpp_headers/nvme_ocssd.o 00:07:22.830 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:22.830 CXX test/cpp_headers/nvme_spec.o 00:07:22.830 CXX test/cpp_headers/nvme_zns.o 00:07:22.830 CC test/event/event_perf/event_perf.o 00:07:22.830 LINK nvme_fuzz 00:07:22.830 CC test/event/reactor/reactor.o 00:07:22.830 CXX test/cpp_headers/nvmf_cmd.o 00:07:22.830 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:22.830 CC test/event/reactor_perf/reactor_perf.o 00:07:22.830 CXX test/cpp_headers/nvmf.o 00:07:22.830 CXX test/cpp_headers/nvmf_spec.o 00:07:22.830 CC test/event/app_repeat/app_repeat.o 00:07:22.830 LINK test_dma 00:07:22.830 CC examples/sock/hello_world/hello_sock.o 00:07:22.830 CXX test/cpp_headers/nvmf_transport.o 00:07:22.830 CXX test/cpp_headers/opal.o 00:07:22.830 CC examples/vmd/lsvmd/lsvmd.o 00:07:22.830 CC examples/vmd/led/led.o 00:07:22.830 CC examples/idxd/perf/perf.o 00:07:22.830 CXX test/cpp_headers/opal_spec.o 00:07:22.830 CXX test/cpp_headers/pci_ids.o 00:07:22.830 CC test/event/scheduler/scheduler.o 00:07:23.094 LINK spdk_nvme 00:07:23.094 CXX test/cpp_headers/pipe.o 00:07:23.094 CC examples/thread/thread/thread_ex.o 00:07:23.094 CXX test/cpp_headers/queue.o 00:07:23.094 CXX test/cpp_headers/reduce.o 00:07:23.094 LINK spdk_bdev 00:07:23.094 CXX test/cpp_headers/rpc.o 00:07:23.094 CXX test/cpp_headers/scheduler.o 00:07:23.094 CXX test/cpp_headers/scsi.o 00:07:23.094 CXX test/cpp_headers/scsi_spec.o 00:07:23.094 CXX test/cpp_headers/sock.o 00:07:23.094 CXX test/cpp_headers/stdinc.o 00:07:23.094 CXX test/cpp_headers/string.o 00:07:23.094 CXX test/cpp_headers/thread.o 00:07:23.094 CXX test/cpp_headers/trace.o 00:07:23.094 CXX test/cpp_headers/trace_parser.o 00:07:23.094 CXX test/cpp_headers/tree.o 00:07:23.094 LINK event_perf 00:07:23.094 CXX test/cpp_headers/ublk.o 00:07:23.094 
LINK reactor 00:07:23.094 CXX test/cpp_headers/util.o 00:07:23.094 CXX test/cpp_headers/uuid.o 00:07:23.094 LINK reactor_perf 00:07:23.094 CXX test/cpp_headers/version.o 00:07:23.094 CXX test/cpp_headers/vfio_user_pci.o 00:07:23.094 LINK vhost_fuzz 00:07:23.094 CXX test/cpp_headers/vfio_user_spec.o 00:07:23.355 LINK app_repeat 00:07:23.355 CC app/vhost/vhost.o 00:07:23.355 CXX test/cpp_headers/vhost.o 00:07:23.355 CXX test/cpp_headers/vmd.o 00:07:23.355 LINK lsvmd 00:07:23.355 CXX test/cpp_headers/xor.o 00:07:23.355 LINK led 00:07:23.355 CXX test/cpp_headers/zipf.o 00:07:23.355 LINK spdk_nvme_perf 00:07:23.355 LINK mem_callbacks 00:07:23.355 LINK spdk_nvme_identify 00:07:23.355 LINK hello_sock 00:07:23.355 LINK spdk_top 00:07:23.355 LINK scheduler 00:07:23.615 LINK thread 00:07:23.615 CC test/nvme/aer/aer.o 00:07:23.615 CC test/nvme/reserve/reserve.o 00:07:23.615 CC test/nvme/err_injection/err_injection.o 00:07:23.615 CC test/nvme/overhead/overhead.o 00:07:23.615 CC test/nvme/reset/reset.o 00:07:23.615 CC test/nvme/e2edp/nvme_dp.o 00:07:23.615 CC test/nvme/simple_copy/simple_copy.o 00:07:23.615 CC test/nvme/fused_ordering/fused_ordering.o 00:07:23.615 CC test/nvme/sgl/sgl.o 00:07:23.615 CC test/nvme/boot_partition/boot_partition.o 00:07:23.615 CC test/nvme/startup/startup.o 00:07:23.615 CC test/nvme/compliance/nvme_compliance.o 00:07:23.615 LINK idxd_perf 00:07:23.615 CC test/nvme/connect_stress/connect_stress.o 00:07:23.615 CC test/nvme/fdp/fdp.o 00:07:23.615 CC test/nvme/cuse/cuse.o 00:07:23.615 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:23.615 CC test/accel/dif/dif.o 00:07:23.615 CC test/blobfs/mkfs/mkfs.o 00:07:23.615 LINK vhost 00:07:23.615 CC test/lvol/esnap/esnap.o 00:07:23.873 LINK boot_partition 00:07:23.873 LINK connect_stress 00:07:23.873 LINK fused_ordering 00:07:23.873 LINK doorbell_aers 00:07:23.873 LINK mkfs 00:07:23.873 LINK startup 00:07:23.873 CC examples/nvme/hotplug/hotplug.o 00:07:23.873 LINK sgl 00:07:23.873 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:07:23.873 CC examples/nvme/reconnect/reconnect.o 00:07:23.873 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:23.873 CC examples/nvme/arbitration/arbitration.o 00:07:23.873 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:23.873 CC examples/nvme/abort/abort.o 00:07:23.873 CC examples/nvme/hello_world/hello_world.o 00:07:23.873 LINK overhead 00:07:23.873 LINK aer 00:07:23.873 LINK memory_ut 00:07:23.873 LINK nvme_dp 00:07:23.873 LINK err_injection 00:07:23.873 LINK reset 00:07:23.873 CC examples/accel/perf/accel_perf.o 00:07:23.873 LINK reserve 00:07:24.131 LINK nvme_compliance 00:07:24.131 LINK fdp 00:07:24.131 CC examples/blob/hello_world/hello_blob.o 00:07:24.131 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:24.131 CC examples/blob/cli/blobcli.o 00:07:24.131 LINK simple_copy 00:07:24.131 LINK pmr_persistence 00:07:24.131 LINK hello_world 00:07:24.131 LINK hotplug 00:07:24.131 LINK cmb_copy 00:07:24.388 LINK hello_blob 00:07:24.388 LINK arbitration 00:07:24.388 LINK hello_fsdev 00:07:24.388 LINK dif 00:07:24.388 LINK reconnect 00:07:24.388 LINK abort 00:07:24.647 LINK nvme_manage 00:07:24.647 LINK blobcli 00:07:24.647 LINK accel_perf 00:07:24.647 LINK iscsi_fuzz 00:07:24.905 CC test/bdev/bdevio/bdevio.o 00:07:24.905 CC examples/bdev/hello_world/hello_bdev.o 00:07:24.905 CC examples/bdev/bdevperf/bdevperf.o 00:07:25.163 LINK bdevio 00:07:25.163 LINK hello_bdev 00:07:25.421 LINK cuse 00:07:25.986 LINK bdevperf 00:07:26.245 CC examples/nvmf/nvmf/nvmf.o 00:07:26.508 LINK nvmf 00:07:29.044 LINK esnap 00:07:29.302 00:07:29.302 real 1m10.171s 00:07:29.302 user 11m53.983s 00:07:29.302 sys 2m35.787s 00:07:29.302 13:39:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:29.302 13:39:00 make -- common/autotest_common.sh@10 -- $ set +x 00:07:29.302 ************************************ 00:07:29.302 END TEST make 00:07:29.302 ************************************ 00:07:29.302 13:39:00 -- spdk/autobuild.sh@1 -- $ 
stop_monitor_resources 00:07:29.302 13:39:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:29.302 13:39:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:29.302 13:39:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.302 13:39:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:29.302 13:39:00 -- pm/common@44 -- $ pid=2043832 00:07:29.302 13:39:00 -- pm/common@50 -- $ kill -TERM 2043832 00:07:29.302 13:39:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.302 13:39:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:29.302 13:39:00 -- pm/common@44 -- $ pid=2043834 00:07:29.302 13:39:00 -- pm/common@50 -- $ kill -TERM 2043834 00:07:29.302 13:39:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.302 13:39:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:29.302 13:39:00 -- pm/common@44 -- $ pid=2043835 00:07:29.302 13:39:00 -- pm/common@50 -- $ kill -TERM 2043835 00:07:29.302 13:39:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.302 13:39:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:29.302 13:39:00 -- pm/common@44 -- $ pid=2043866 00:07:29.302 13:39:00 -- pm/common@50 -- $ sudo -E kill -TERM 2043866 00:07:29.302 13:39:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:29.302 13:39:00 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:29.302 13:39:00 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.302 13:39:00 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.302 
13:39:00 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.560 13:39:00 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.560 13:39:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.560 13:39:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.560 13:39:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.560 13:39:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.560 13:39:00 -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.560 13:39:00 -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.560 13:39:00 -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.560 13:39:00 -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.560 13:39:00 -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.560 13:39:00 -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.560 13:39:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.560 13:39:00 -- scripts/common.sh@344 -- # case "$op" in 00:07:29.560 13:39:00 -- scripts/common.sh@345 -- # : 1 00:07:29.560 13:39:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.560 13:39:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.560 13:39:00 -- scripts/common.sh@365 -- # decimal 1 00:07:29.560 13:39:00 -- scripts/common.sh@353 -- # local d=1 00:07:29.560 13:39:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.560 13:39:00 -- scripts/common.sh@355 -- # echo 1 00:07:29.560 13:39:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.560 13:39:00 -- scripts/common.sh@366 -- # decimal 2 00:07:29.560 13:39:00 -- scripts/common.sh@353 -- # local d=2 00:07:29.560 13:39:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.560 13:39:00 -- scripts/common.sh@355 -- # echo 2 00:07:29.560 13:39:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.560 13:39:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.560 13:39:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.560 13:39:00 -- scripts/common.sh@368 -- # return 0 00:07:29.560 13:39:00 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.560 13:39:00 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.560 --rc genhtml_branch_coverage=1 00:07:29.560 --rc genhtml_function_coverage=1 00:07:29.560 --rc genhtml_legend=1 00:07:29.560 --rc geninfo_all_blocks=1 00:07:29.560 --rc geninfo_unexecuted_blocks=1 00:07:29.560 00:07:29.560 ' 00:07:29.560 13:39:00 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.560 --rc genhtml_branch_coverage=1 00:07:29.560 --rc genhtml_function_coverage=1 00:07:29.560 --rc genhtml_legend=1 00:07:29.560 --rc geninfo_all_blocks=1 00:07:29.560 --rc geninfo_unexecuted_blocks=1 00:07:29.560 00:07:29.560 ' 00:07:29.560 13:39:00 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.560 --rc genhtml_branch_coverage=1 00:07:29.560 --rc 
genhtml_function_coverage=1 00:07:29.560 --rc genhtml_legend=1 00:07:29.560 --rc geninfo_all_blocks=1 00:07:29.560 --rc geninfo_unexecuted_blocks=1 00:07:29.560 00:07:29.560 ' 00:07:29.560 13:39:00 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.560 --rc genhtml_branch_coverage=1 00:07:29.560 --rc genhtml_function_coverage=1 00:07:29.560 --rc genhtml_legend=1 00:07:29.560 --rc geninfo_all_blocks=1 00:07:29.560 --rc geninfo_unexecuted_blocks=1 00:07:29.560 00:07:29.560 ' 00:07:29.560 13:39:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.560 13:39:00 -- nvmf/common.sh@7 -- # uname -s 00:07:29.560 13:39:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.560 13:39:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.560 13:39:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.560 13:39:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.560 13:39:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.560 13:39:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.560 13:39:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.560 13:39:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.560 13:39:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.560 13:39:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.560 13:39:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.560 13:39:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.560 13:39:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.560 13:39:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.560 13:39:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.560 13:39:00 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.560 13:39:00 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.560 13:39:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.560 13:39:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.560 13:39:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.560 13:39:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.560 13:39:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.560 13:39:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.560 13:39:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.560 13:39:00 -- paths/export.sh@5 -- # export PATH 00:07:29.560 13:39:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.560 13:39:00 -- nvmf/common.sh@51 -- # : 0 00:07:29.560 13:39:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.560 13:39:00 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:07:29.560 13:39:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.560 13:39:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.560 13:39:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.560 13:39:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.560 13:39:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.560 13:39:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.560 13:39:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.560 13:39:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:29.560 13:39:00 -- spdk/autotest.sh@32 -- # uname -s 00:07:29.560 13:39:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:29.560 13:39:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:29.560 13:39:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:29.560 13:39:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:29.560 13:39:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:29.560 13:39:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:29.560 13:39:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:29.560 13:39:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:29.560 13:39:00 -- spdk/autotest.sh@48 -- # udevadm_pid=2103331 00:07:29.560 13:39:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:29.560 13:39:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:29.560 13:39:00 -- pm/common@17 -- # local monitor 00:07:29.560 13:39:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.560 13:39:00 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:07:29.560 13:39:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.560 13:39:00 -- pm/common@21 -- # date +%s 00:07:29.560 13:39:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.560 13:39:00 -- pm/common@21 -- # date +%s 00:07:29.560 13:39:00 -- pm/common@25 -- # sleep 1 00:07:29.560 13:39:00 -- pm/common@21 -- # date +%s 00:07:29.560 13:39:00 -- pm/common@21 -- # date +%s 00:07:29.561 13:39:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402340 00:07:29.561 13:39:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402340 00:07:29.561 13:39:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402340 00:07:29.561 13:39:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402340 00:07:29.561 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402340_collect-vmstat.pm.log 00:07:29.561 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402340_collect-cpu-load.pm.log 00:07:29.561 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402340_collect-cpu-temp.pm.log 00:07:29.561 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402340_collect-bmc-pm.bmc.pm.log 00:07:30.518 
13:39:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:30.518 13:39:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:30.518 13:39:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.518 13:39:01 -- common/autotest_common.sh@10 -- # set +x 00:07:30.518 13:39:01 -- spdk/autotest.sh@59 -- # create_test_list 00:07:30.518 13:39:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:30.518 13:39:01 -- common/autotest_common.sh@10 -- # set +x 00:07:30.518 13:39:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:30.518 13:39:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.518 13:39:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.518 13:39:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:30.518 13:39:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.518 13:39:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:30.518 13:39:01 -- common/autotest_common.sh@1457 -- # uname 00:07:30.518 13:39:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:30.518 13:39:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:30.518 13:39:01 -- common/autotest_common.sh@1477 -- # uname 00:07:30.518 13:39:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:30.518 13:39:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:30.518 13:39:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:30.518 lcov: LCOV version 1.15 00:07:30.519 13:39:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:48.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:48.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:10.543 13:39:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:10.543 13:39:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.543 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:08:10.543 13:39:39 -- spdk/autotest.sh@78 -- # rm -f 00:08:10.543 13:39:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:10.543 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:08:10.543 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:08:10.543 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:08:10.543 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:08:10.543 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:08:10.543 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:08:10.543 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:08:10.543 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:08:10.543 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:08:10.543 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:08:10.543 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:08:10.543 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:08:10.543 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:08:10.543 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:08:10.543 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:08:10.543 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:08:10.543 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:08:10.543 13:39:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:10.543 13:39:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:10.543 13:39:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:10.543 13:39:40 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:10.543 13:39:40 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:10.543 13:39:40 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:10.543 13:39:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:10.543 13:39:40 -- common/autotest_common.sh@1669 -- # bdf=0000:0b:00.0 00:08:10.543 13:39:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:10.543 13:39:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:10.543 13:39:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:10.543 13:39:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:10.543 13:39:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:10.543 13:39:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:10.543 13:39:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:10.543 13:39:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:10.543 13:39:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:10.543 13:39:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:10.543 13:39:40 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:10.543 No valid GPT data, bailing 00:08:10.543 13:39:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:10.543 13:39:40 -- scripts/common.sh@394 -- # pt= 00:08:10.543 13:39:40 -- scripts/common.sh@395 -- 
# return 1 00:08:10.543 13:39:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:10.543 1+0 records in 00:08:10.543 1+0 records out 00:08:10.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00208202 s, 504 MB/s 00:08:10.543 13:39:40 -- spdk/autotest.sh@105 -- # sync 00:08:10.543 13:39:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:10.543 13:39:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:10.543 13:39:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:11.922 13:39:43 -- spdk/autotest.sh@111 -- # uname -s 00:08:11.922 13:39:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:11.922 13:39:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:11.922 13:39:43 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:12.862 Hugepages 00:08:12.862 node hugesize free / total 00:08:12.862 node0 1048576kB 0 / 0 00:08:12.862 node0 2048kB 0 / 0 00:08:12.862 node1 1048576kB 0 / 0 00:08:12.862 node1 2048kB 0 / 0 00:08:12.862 00:08:12.862 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:12.862 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:08:12.862 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:08:12.862 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:08:12.862 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:08:12.862 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:08:12.862 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:08:12.862 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:08:12.862 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:08:12.862 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:08:12.862 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:08:12.862 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:08:13.121 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:08:13.121 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:08:13.121 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:08:13.121 I/OAT 0000:80:04.5 8086 
0e25 1 ioatdma - - 00:08:13.121 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:08:13.121 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:08:13.121 13:39:44 -- spdk/autotest.sh@117 -- # uname -s 00:08:13.121 13:39:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:13.121 13:39:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:13.121 13:39:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:14.502 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:14.502 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:14.502 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:14.502 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:14.502 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:14.502 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:14.502 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:14.502 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:14.502 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:15.439 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:08:15.698 13:39:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:16.634 13:39:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:16.634 13:39:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:16.634 13:39:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:16.634 13:39:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:16.634 13:39:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:16.634 13:39:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:16.634 13:39:48 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:16.634 13:39:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:16.634 13:39:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:16.634 13:39:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:16.634 13:39:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:08:16.634 13:39:48 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:18.009 Waiting for block devices as requested 00:08:18.009 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:08:18.009 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:08:18.009 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:08:18.269 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:08:18.269 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:08:18.269 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:08:18.269 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:08:18.528 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:08:18.528 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:08:18.810 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:08:18.810 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:08:18.810 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:08:18.810 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:08:18.810 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:08:19.069 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:08:19.069 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:08:19.069 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:08:19.328 13:39:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:19.328 13:39:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:08:19.328 13:39:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:19.328 13:39:50 -- 
common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:08:19.328 13:39:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:08:19.328 13:39:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:08:19.328 13:39:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:08:19.328 13:39:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:19.328 13:39:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:19.328 13:39:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:19.328 13:39:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:19.328 13:39:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:19.328 13:39:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:19.328 13:39:50 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:08:19.328 13:39:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:19.328 13:39:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:19.328 13:39:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:19.328 13:39:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:19.328 13:39:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:19.328 13:39:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:19.328 13:39:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:19.328 13:39:50 -- common/autotest_common.sh@1543 -- # continue 00:08:19.328 13:39:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:19.328 13:39:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.328 13:39:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.328 13:39:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:19.328 13:39:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.328 
13:39:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.328 13:39:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:20.704 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:20.704 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:20.704 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:20.704 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:20.704 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:20.704 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:20.704 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:20.704 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:20.704 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:21.644 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:08:21.902 13:39:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:21.903 13:39:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.903 13:39:53 -- common/autotest_common.sh@10 -- # set +x 00:08:21.903 13:39:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:21.903 13:39:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:21.903 13:39:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:21.903 13:39:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:21.903 13:39:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:21.903 13:39:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:21.903 13:39:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:21.903 13:39:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
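The pre-cleanup trace above gates the namespace revert on the controller's OACS (Optional Admin Command Support) field: it runs `nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2`, masks out bit 3 (value 0x8, the Namespace Management/Attachment bit), and only continues because `0xf & 0x8 = 8` is non-zero. A minimal sketch of that bit test; the `nvme` invocation is left as a comment since it needs a real controller:

```shell
#!/usr/bin/env bash
# Extract the Namespace Management bit (bit 3) from an OACS value.
# On a live system the value would come from:
#   oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
oacs_ns_manage() {
    local oacs=$1
    # Mask bit 3 (0x8): non-zero means Namespace Management and
    # Namespace Attachment commands are supported.
    echo $(( oacs & 0x8 ))
}

oacs_ns_manage 0xf    # -> 8 (supported, as in the trace above)
oacs_ns_manage 0x7    # -> 0 (not supported)
```

The same masking pattern works for any other OACS capability bit by changing the mask constant.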
00:08:21.903 13:39:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:21.903 13:39:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:21.903 13:39:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.903 13:39:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:21.903 13:39:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:21.903 13:39:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:21.903 13:39:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:08:21.903 13:39:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:21.903 13:39:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:08:21.903 13:39:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:08:21.903 13:39:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:21.903 13:39:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:08:21.903 13:39:53 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:08:21.903 13:39:53 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:08:21.903 13:39:53 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:08:21.903 13:39:53 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2114394 00:08:21.903 13:39:53 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:21.903 13:39:53 -- common/autotest_common.sh@1585 -- # waitforlisten 2114394 00:08:21.903 13:39:53 -- common/autotest_common.sh@835 -- # '[' -z 2114394 ']' 00:08:21.903 13:39:53 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.903 13:39:53 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.903 13:39:53 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.903 13:39:53 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.903 13:39:53 -- common/autotest_common.sh@10 -- # set +x 00:08:21.903 [2024-12-05 13:39:53.332623] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:08:21.903 [2024-12-05 13:39:53.332718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114394 ] 00:08:21.903 [2024-12-05 13:39:53.397863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.162 [2024-12-05 13:39:53.458779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.420 13:39:53 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.420 13:39:53 -- common/autotest_common.sh@868 -- # return 0 00:08:22.420 13:39:53 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:08:22.420 13:39:53 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:08:22.420 13:39:53 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:08:25.697 nvme0n1 00:08:25.697 13:39:56 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:25.697 [2024-12-05 13:39:57.075310] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:08:25.697 [2024-12-05 13:39:57.075351] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:08:25.697 request: 00:08:25.697 { 00:08:25.697 "nvme_ctrlr_name": "nvme0", 00:08:25.697 "password": "test", 00:08:25.697 "method": 
"bdev_nvme_opal_revert", 00:08:25.697 "req_id": 1 00:08:25.697 } 00:08:25.697 Got JSON-RPC error response 00:08:25.697 response: 00:08:25.697 { 00:08:25.697 "code": -32603, 00:08:25.697 "message": "Internal error" 00:08:25.697 } 00:08:25.697 13:39:57 -- common/autotest_common.sh@1591 -- # true 00:08:25.697 13:39:57 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:08:25.697 13:39:57 -- common/autotest_common.sh@1595 -- # killprocess 2114394 00:08:25.697 13:39:57 -- common/autotest_common.sh@954 -- # '[' -z 2114394 ']' 00:08:25.697 13:39:57 -- common/autotest_common.sh@958 -- # kill -0 2114394 00:08:25.697 13:39:57 -- common/autotest_common.sh@959 -- # uname 00:08:25.697 13:39:57 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.697 13:39:57 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2114394 00:08:25.697 13:39:57 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.697 13:39:57 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.697 13:39:57 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2114394' 00:08:25.697 killing process with pid 2114394 00:08:25.697 13:39:57 -- common/autotest_common.sh@973 -- # kill 2114394 00:08:25.697 13:39:57 -- common/autotest_common.sh@978 -- # wait 2114394 00:08:27.591 13:39:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:27.591 13:39:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:27.591 13:39:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:27.591 13:39:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:27.591 13:39:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:27.591 13:39:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.591 13:39:58 -- common/autotest_common.sh@10 -- # set +x 00:08:27.591 13:39:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:27.591 13:39:58 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:27.591 13:39:58 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.591 13:39:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.591 13:39:58 -- common/autotest_common.sh@10 -- # set +x 00:08:27.591 ************************************ 00:08:27.591 START TEST env 00:08:27.591 ************************************ 00:08:27.591 13:39:58 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:27.591 * Looking for test storage... 00:08:27.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:27.591 13:39:58 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.591 13:39:58 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.591 13:39:58 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.591 13:39:58 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.591 13:39:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.591 13:39:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.592 13:39:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.592 13:39:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.592 13:39:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.592 13:39:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.592 13:39:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.592 13:39:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.592 13:39:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.592 13:39:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.592 13:39:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.592 13:39:58 env -- scripts/common.sh@344 -- # case "$op" in 00:08:27.592 13:39:58 env -- scripts/common.sh@345 -- # : 1 00:08:27.592 13:39:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.592 13:39:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.592 13:39:58 env -- scripts/common.sh@365 -- # decimal 1 00:08:27.592 13:39:58 env -- scripts/common.sh@353 -- # local d=1 00:08:27.592 13:39:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.592 13:39:59 env -- scripts/common.sh@355 -- # echo 1 00:08:27.592 13:39:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.592 13:39:59 env -- scripts/common.sh@366 -- # decimal 2 00:08:27.592 13:39:59 env -- scripts/common.sh@353 -- # local d=2 00:08:27.592 13:39:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.592 13:39:59 env -- scripts/common.sh@355 -- # echo 2 00:08:27.592 13:39:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.592 13:39:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.592 13:39:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.592 13:39:59 env -- scripts/common.sh@368 -- # return 0 00:08:27.592 13:39:59 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.592 13:39:59 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.592 --rc genhtml_branch_coverage=1 00:08:27.592 --rc genhtml_function_coverage=1 00:08:27.592 --rc genhtml_legend=1 00:08:27.592 --rc geninfo_all_blocks=1 00:08:27.592 --rc geninfo_unexecuted_blocks=1 00:08:27.592 00:08:27.592 ' 00:08:27.592 13:39:59 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.592 --rc genhtml_branch_coverage=1 00:08:27.592 --rc genhtml_function_coverage=1 00:08:27.592 --rc genhtml_legend=1 00:08:27.592 --rc geninfo_all_blocks=1 00:08:27.592 --rc geninfo_unexecuted_blocks=1 00:08:27.592 00:08:27.592 ' 00:08:27.592 13:39:59 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:27.592 --rc genhtml_branch_coverage=1 00:08:27.592 --rc genhtml_function_coverage=1 00:08:27.592 --rc genhtml_legend=1 00:08:27.592 --rc geninfo_all_blocks=1 00:08:27.592 --rc geninfo_unexecuted_blocks=1 00:08:27.592 00:08:27.592 ' 00:08:27.592 13:39:59 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.592 --rc genhtml_branch_coverage=1 00:08:27.592 --rc genhtml_function_coverage=1 00:08:27.592 --rc genhtml_legend=1 00:08:27.592 --rc geninfo_all_blocks=1 00:08:27.592 --rc geninfo_unexecuted_blocks=1 00:08:27.592 00:08:27.592 ' 00:08:27.592 13:39:59 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:27.592 13:39:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.592 13:39:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.592 13:39:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:27.592 ************************************ 00:08:27.592 START TEST env_memory 00:08:27.592 ************************************ 00:08:27.592 13:39:59 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:27.592 00:08:27.592 00:08:27.592 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.592 http://cunit.sourceforge.net/ 00:08:27.592 00:08:27.592 00:08:27.592 Suite: memory 00:08:27.592 Test: alloc and free memory map ...[2024-12-05 13:39:59.067809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:27.592 passed 00:08:27.592 Test: mem map translation ...[2024-12-05 13:39:59.087794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:27.592 [2024-12-05 
13:39:59.087815] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:27.592 [2024-12-05 13:39:59.087855] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:27.592 [2024-12-05 13:39:59.087867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:27.849 passed 00:08:27.849 Test: mem map registration ...[2024-12-05 13:39:59.128781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:27.849 [2024-12-05 13:39:59.128801] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:27.849 passed 00:08:27.849 Test: mem map adjacent registrations ...passed 00:08:27.849 00:08:27.849 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.849 suites 1 1 n/a 0 0 00:08:27.849 tests 4 4 4 0 0 00:08:27.849 asserts 152 152 152 0 n/a 00:08:27.849 00:08:27.849 Elapsed time = 0.141 seconds 00:08:27.849 00:08:27.849 real 0m0.150s 00:08:27.849 user 0m0.139s 00:08:27.849 sys 0m0.010s 00:08:27.849 13:39:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.849 13:39:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:27.849 ************************************ 00:08:27.849 END TEST env_memory 00:08:27.849 ************************************ 00:08:27.849 13:39:59 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:27.849 13:39:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:08:27.849 13:39:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.849 13:39:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:27.849 ************************************ 00:08:27.849 START TEST env_vtophys 00:08:27.849 ************************************ 00:08:27.849 13:39:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:27.849 EAL: lib.eal log level changed from notice to debug 00:08:27.849 EAL: Detected lcore 0 as core 0 on socket 0 00:08:27.849 EAL: Detected lcore 1 as core 1 on socket 0 00:08:27.849 EAL: Detected lcore 2 as core 2 on socket 0 00:08:27.849 EAL: Detected lcore 3 as core 3 on socket 0 00:08:27.849 EAL: Detected lcore 4 as core 4 on socket 0 00:08:27.849 EAL: Detected lcore 5 as core 5 on socket 0 00:08:27.849 EAL: Detected lcore 6 as core 8 on socket 0 00:08:27.849 EAL: Detected lcore 7 as core 9 on socket 0 00:08:27.849 EAL: Detected lcore 8 as core 10 on socket 0 00:08:27.849 EAL: Detected lcore 9 as core 11 on socket 0 00:08:27.849 EAL: Detected lcore 10 as core 12 on socket 0 00:08:27.849 EAL: Detected lcore 11 as core 13 on socket 0 00:08:27.849 EAL: Detected lcore 12 as core 0 on socket 1 00:08:27.849 EAL: Detected lcore 13 as core 1 on socket 1 00:08:27.849 EAL: Detected lcore 14 as core 2 on socket 1 00:08:27.849 EAL: Detected lcore 15 as core 3 on socket 1 00:08:27.849 EAL: Detected lcore 16 as core 4 on socket 1 00:08:27.849 EAL: Detected lcore 17 as core 5 on socket 1 00:08:27.849 EAL: Detected lcore 18 as core 8 on socket 1 00:08:27.849 EAL: Detected lcore 19 as core 9 on socket 1 00:08:27.849 EAL: Detected lcore 20 as core 10 on socket 1 00:08:27.849 EAL: Detected lcore 21 as core 11 on socket 1 00:08:27.849 EAL: Detected lcore 22 as core 12 on socket 1 00:08:27.849 EAL: Detected lcore 23 as core 13 on socket 1 00:08:27.849 EAL: Detected lcore 24 as core 0 on socket 0 00:08:27.849 EAL: Detected lcore 25 as core 
1 on socket 0 00:08:27.849 EAL: Detected lcore 26 as core 2 on socket 0 00:08:27.850 EAL: Detected lcore 27 as core 3 on socket 0 00:08:27.850 EAL: Detected lcore 28 as core 4 on socket 0 00:08:27.850 EAL: Detected lcore 29 as core 5 on socket 0 00:08:27.850 EAL: Detected lcore 30 as core 8 on socket 0 00:08:27.850 EAL: Detected lcore 31 as core 9 on socket 0 00:08:27.850 EAL: Detected lcore 32 as core 10 on socket 0 00:08:27.850 EAL: Detected lcore 33 as core 11 on socket 0 00:08:27.850 EAL: Detected lcore 34 as core 12 on socket 0 00:08:27.850 EAL: Detected lcore 35 as core 13 on socket 0 00:08:27.850 EAL: Detected lcore 36 as core 0 on socket 1 00:08:27.850 EAL: Detected lcore 37 as core 1 on socket 1 00:08:27.850 EAL: Detected lcore 38 as core 2 on socket 1 00:08:27.850 EAL: Detected lcore 39 as core 3 on socket 1 00:08:27.850 EAL: Detected lcore 40 as core 4 on socket 1 00:08:27.850 EAL: Detected lcore 41 as core 5 on socket 1 00:08:27.850 EAL: Detected lcore 42 as core 8 on socket 1 00:08:27.850 EAL: Detected lcore 43 as core 9 on socket 1 00:08:27.850 EAL: Detected lcore 44 as core 10 on socket 1 00:08:27.850 EAL: Detected lcore 45 as core 11 on socket 1 00:08:27.850 EAL: Detected lcore 46 as core 12 on socket 1 00:08:27.850 EAL: Detected lcore 47 as core 13 on socket 1 00:08:27.850 EAL: Maximum logical cores by configuration: 128 00:08:27.850 EAL: Detected CPU lcores: 48 00:08:27.850 EAL: Detected NUMA nodes: 2 00:08:27.850 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:27.850 EAL: Detected shared linkage of DPDK 00:08:27.850 EAL: No shared files mode enabled, IPC will be disabled 00:08:27.850 EAL: Bus pci wants IOVA as 'DC' 00:08:27.850 EAL: Buses did not request a specific IOVA mode. 00:08:27.850 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:27.850 EAL: Selected IOVA mode 'VA' 00:08:27.850 EAL: Probing VFIO support... 
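The EAL lcore listing above follows a regular pattern on this 2-socket, 48-thread box: lcores 0-11 sit on socket 0, 12-23 on socket 1, and the hyperthread siblings 24-47 repeat the same layout (lcore 25 is core 1 on socket 0, mirroring lcore 1). A sketch that reproduces the socket mapping observed in this particular trace; this is specific to this machine's topology, not a general rule:

```shell
#!/usr/bin/env bash
# Socket of a given lcore, per the layout in the EAL output above:
# 12 lcores per socket, with the second hardware thread of each core
# numbered 24..47 in the same order as 0..23.
socket_of_lcore() {
    echo $(( ($1 / 12) % 2 ))
}

socket_of_lcore 0    # -> 0
socket_of_lcore 13   # -> 1
socket_of_lcore 25   # -> 0 (hyperthread sibling of lcore 1)
```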
00:08:27.850 EAL: IOMMU type 1 (Type 1) is supported 00:08:27.850 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:27.850 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:27.850 EAL: VFIO support initialized 00:08:27.850 EAL: Ask a virtual area of 0x2e000 bytes 00:08:27.850 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:27.850 EAL: Setting up physically contiguous memory... 00:08:27.850 EAL: Setting maximum number of open files to 524288 00:08:27.850 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:27.850 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:27.850 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:27.850 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:27.850 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:27.850 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:27.850 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:27.850 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:27.850 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:27.850 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:27.850 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:27.850 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.850 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:27.850 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:27.850 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.850 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
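Each memseg list the EAL reserves above pairs a small 0x61000-byte header area with a 0x400000000-byte VA window; that window is exactly the `n_segs:8192` segments of the 2 MiB hugepage size (`hugepage_sz:2097152`) declared when the list is created. A quick check of that arithmetic:

```shell
#!/usr/bin/env bash
# 8192 segments x 2 MiB hugepages = 16 GiB per memseg list,
# matching the 0x400000000 reservations in the EAL output above.
printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000
```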
00:08:27.850 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:27.850 EAL: Hugepages will be freed exactly as allocated. 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: TSC frequency is ~2700000 KHz 00:08:27.850 EAL: Main lcore 0 is ready (tid=7fc182416a00;cpuset=[0]) 00:08:27.850 EAL: Trying to obtain current memory policy. 00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.850 EAL: Restoring previous memory policy: 0 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was expanded by 2MB 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:27.850 EAL: Mem event callback 'spdk:(nil)' registered 00:08:27.850 00:08:27.850 00:08:27.850 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.850 http://cunit.sourceforge.net/ 00:08:27.850 00:08:27.850 00:08:27.850 Suite: components_suite 00:08:27.850 Test: vtophys_malloc_test ...passed 00:08:27.850 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.850 EAL: Restoring previous memory policy: 4 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was expanded by 4MB 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was shrunk by 4MB 00:08:27.850 EAL: Trying to obtain current memory policy. 
00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.850 EAL: Restoring previous memory policy: 4 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was expanded by 6MB 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was shrunk by 6MB 00:08:27.850 EAL: Trying to obtain current memory policy. 00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.850 EAL: Restoring previous memory policy: 4 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was expanded by 10MB 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was shrunk by 10MB 00:08:27.850 EAL: Trying to obtain current memory policy. 00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.850 EAL: Restoring previous memory policy: 4 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was expanded by 18MB 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was shrunk by 18MB 00:08:27.850 EAL: Trying to obtain current memory policy. 
00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.850 EAL: Restoring previous memory policy: 4 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was expanded by 34MB 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was shrunk by 34MB 00:08:27.850 EAL: Trying to obtain current memory policy. 00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.850 EAL: Restoring previous memory policy: 4 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was expanded by 66MB 00:08:27.850 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.850 EAL: request: mp_malloc_sync 00:08:27.850 EAL: No shared files mode enabled, IPC is disabled 00:08:27.850 EAL: Heap on socket 0 was shrunk by 66MB 00:08:27.850 EAL: Trying to obtain current memory policy. 00:08:27.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.108 EAL: Restoring previous memory policy: 4 00:08:28.108 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.108 EAL: request: mp_malloc_sync 00:08:28.108 EAL: No shared files mode enabled, IPC is disabled 00:08:28.108 EAL: Heap on socket 0 was expanded by 130MB 00:08:28.108 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.108 EAL: request: mp_malloc_sync 00:08:28.109 EAL: No shared files mode enabled, IPC is disabled 00:08:28.109 EAL: Heap on socket 0 was shrunk by 130MB 00:08:28.109 EAL: Trying to obtain current memory policy. 
00:08:28.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.109 EAL: Restoring previous memory policy: 4 00:08:28.109 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.109 EAL: request: mp_malloc_sync 00:08:28.109 EAL: No shared files mode enabled, IPC is disabled 00:08:28.109 EAL: Heap on socket 0 was expanded by 258MB 00:08:28.109 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.109 EAL: request: mp_malloc_sync 00:08:28.109 EAL: No shared files mode enabled, IPC is disabled 00:08:28.109 EAL: Heap on socket 0 was shrunk by 258MB 00:08:28.109 EAL: Trying to obtain current memory policy. 00:08:28.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.365 EAL: Restoring previous memory policy: 4 00:08:28.365 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.365 EAL: request: mp_malloc_sync 00:08:28.365 EAL: No shared files mode enabled, IPC is disabled 00:08:28.365 EAL: Heap on socket 0 was expanded by 514MB 00:08:28.365 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.623 EAL: request: mp_malloc_sync 00:08:28.623 EAL: No shared files mode enabled, IPC is disabled 00:08:28.623 EAL: Heap on socket 0 was shrunk by 514MB 00:08:28.623 EAL: Trying to obtain current memory policy. 
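The vtophys_spdk_malloc_test rounds in this trace roughly double the allocation each time; the reported heap expansions (4, 6, 10, 18, ... MB, ending with the 1026 MB round that follows below) are each a power of two plus one extra 2 MB hugepage. That relationship is an inference from the logged numbers alone, not from the SPDK test source; a quick check:

```python
# Heap expansion sizes (in MB) reported by vtophys_spdk_malloc_test,
# copied from the EAL log lines in this trace.
observed = [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]

# Each round appears to be 2**n MB of allocated memory plus one extra
# 2 MB hugepage of overhead -- an inference from the numbers only.
derived = [2**n + 2 for n in range(1, 11)]
assert derived == observed
```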
00:08:28.623 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:28.881 EAL: Restoring previous memory policy: 4
00:08:28.881 EAL: Calling mem event callback 'spdk:(nil)'
00:08:28.881 EAL: request: mp_malloc_sync
00:08:28.881 EAL: No shared files mode enabled, IPC is disabled
00:08:28.881 EAL: Heap on socket 0 was expanded by 1026MB
00:08:29.137 EAL: Calling mem event callback 'spdk:(nil)'
00:08:29.395 EAL: request: mp_malloc_sync
00:08:29.395 EAL: No shared files mode enabled, IPC is disabled
00:08:29.395 EAL: Heap on socket 0 was shrunk by 1026MB
00:08:29.395 passed
00:08:29.395
00:08:29.395 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:29.395               suites      1      1    n/a      0        0
00:08:29.395                tests      2      2      2      0        0
00:08:29.395              asserts    497    497    497      0      n/a
00:08:29.395
00:08:29.395 Elapsed time =    1.332 seconds
00:08:29.395 EAL: Calling mem event callback 'spdk:(nil)'
00:08:29.395 EAL: request: mp_malloc_sync
00:08:29.395 EAL: No shared files mode enabled, IPC is disabled
00:08:29.395 EAL: Heap on socket 0 was shrunk by 2MB
00:08:29.395 EAL: No shared files mode enabled, IPC is disabled
00:08:29.395 EAL: No shared files mode enabled, IPC is disabled
00:08:29.395 EAL: No shared files mode enabled, IPC is disabled
00:08:29.395
00:08:29.395 real	0m1.451s
00:08:29.395 user	0m0.838s
00:08:29.395 sys	0m0.581s
00:08:29.395 13:40:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:29.395 13:40:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:29.395 ************************************
00:08:29.395 END TEST env_vtophys
00:08:29.395 ************************************
00:08:29.395 13:40:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:29.395 13:40:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:29.395 13:40:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:29.395 13:40:00 env -- common/autotest_common.sh@10 -- # set +x
00:08:29.395 ************************************
00:08:29.395 START TEST env_pci
00:08:29.395 ************************************
00:08:29.395 13:40:00 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:29.395
00:08:29.395
00:08:29.395 CUnit - A unit testing framework for C - Version 2.1-3
00:08:29.395 http://cunit.sourceforge.net/
00:08:29.395
00:08:29.395
00:08:29.395 Suite: pci
00:08:29.395 Test: pci_hook ...[2024-12-05 13:40:00.741042] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2115291 has claimed it
00:08:29.395 EAL: Cannot find device (10000:00:01.0)
00:08:29.395 EAL: Failed to attach device on primary process
00:08:29.395 passed
00:08:29.395
00:08:29.395 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:29.395               suites      1      1    n/a      0        0
00:08:29.395                tests      1      1      1      0        0
00:08:29.395              asserts     25     25     25      0      n/a
00:08:29.395
00:08:29.395 Elapsed time =    0.019 seconds
00:08:29.395
00:08:29.395 real	0m0.031s
00:08:29.395 user	0m0.010s
00:08:29.395 sys	0m0.020s
00:08:29.395 13:40:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:29.395 13:40:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:29.395 ************************************
00:08:29.395 END TEST env_pci
00:08:29.395 ************************************
00:08:29.395 13:40:00 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:29.395 13:40:00 env -- env/env.sh@15 -- # uname
00:08:29.395 13:40:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:29.395 13:40:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:29.395 13:40:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:29.395 13:40:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:29.395 13:40:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:29.395 13:40:00 env -- common/autotest_common.sh@10 -- # set +x
00:08:29.395 ************************************
00:08:29.395 START TEST env_dpdk_post_init
00:08:29.395 ************************************
00:08:29.395 13:40:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:29.395 EAL: Detected CPU lcores: 48
00:08:29.395 EAL: Detected NUMA nodes: 2
00:08:29.395 EAL: Detected shared linkage of DPDK
00:08:29.395 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:29.395 EAL: Selected IOVA mode 'VA'
00:08:29.395 EAL: VFIO support initialized
00:08:29.395 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:29.689 EAL: Using IOMMU type 1 (Type 1)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:08:29.689 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:08:30.278 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0)
00:08:30.278 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:08:30.279 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:08:30.279 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:08:30.544 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:08:30.544 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:08:30.544 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:08:30.544 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:08:30.544 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:08:33.820 EAL: Releasing PCI mapped resource for 0000:0b:00.0
00:08:33.820 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000
00:08:33.820 Starting DPDK initialization...
00:08:33.820 Starting SPDK post initialization...
00:08:33.820 SPDK NVMe probe
00:08:33.820 Attaching to 0000:0b:00.0
00:08:33.820 Attached to 0000:0b:00.0
00:08:33.820 Cleaning up...
00:08:33.820
00:08:33.820 real	0m4.369s
00:08:33.820 user	0m3.010s
00:08:33.820 sys	0m0.422s
00:08:33.820 13:40:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.820 13:40:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:33.820 ************************************
00:08:33.820 END TEST env_dpdk_post_init
00:08:33.820 ************************************
00:08:33.820 13:40:05 env -- env/env.sh@26 -- # uname
00:08:33.820 13:40:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:33.820 13:40:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:33.820 13:40:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:33.820 13:40:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.820 13:40:05 env -- common/autotest_common.sh@10 -- # set +x
00:08:33.820 ************************************
00:08:33.820 START TEST env_mem_callbacks
00:08:33.820 ************************************
00:08:33.820 13:40:05 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:33.820 EAL: Detected CPU lcores: 48
00:08:33.820 EAL: Detected NUMA nodes: 2
00:08:33.820 EAL: Detected shared linkage of DPDK
00:08:33.820 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:33.820 EAL: Selected IOVA mode 'VA'
00:08:33.820 EAL: VFIO support initialized
00:08:33.820 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:33.820
00:08:33.820
00:08:33.820 CUnit - A unit testing framework for C - Version 2.1-3
00:08:33.820 http://cunit.sourceforge.net/
00:08:33.820
00:08:33.820
00:08:33.820 Suite: memory
00:08:33.820 Test: test ...
00:08:33.820 register 0x200000200000 2097152
00:08:33.820 malloc 3145728
00:08:33.820 register 0x200000400000 4194304
00:08:33.820 buf 0x200000500000 len 3145728 PASSED
00:08:33.820 malloc 64
00:08:33.820 buf 0x2000004fff40 len 64 PASSED
00:08:33.820 malloc 4194304
00:08:33.820 register 0x200000800000 6291456
00:08:33.820 buf 0x200000a00000 len 4194304 PASSED
00:08:33.820 free 0x200000500000 3145728
00:08:33.820 free 0x2000004fff40 64
00:08:33.820 unregister 0x200000400000 4194304 PASSED
00:08:33.820 free 0x200000a00000 4194304
00:08:33.820 unregister 0x200000800000 6291456 PASSED
00:08:33.820 malloc 8388608
00:08:33.820 register 0x200000400000 10485760
00:08:33.820 buf 0x200000600000 len 8388608 PASSED
00:08:33.820 free 0x200000600000 8388608
00:08:33.820 unregister 0x200000400000 10485760 PASSED
00:08:33.821 passed
00:08:33.821
00:08:33.821 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:33.821               suites      1      1    n/a      0        0
00:08:33.821                tests      1      1      1      0        0
00:08:33.821              asserts     15     15     15      0      n/a
00:08:33.821
00:08:33.821 Elapsed time =    0.005 seconds
00:08:33.821
00:08:33.821 real	0m0.048s
00:08:33.821 user	0m0.011s
00:08:33.821 sys	0m0.037s
00:08:33.821 13:40:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.821 13:40:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:33.821 ************************************
00:08:33.821 END TEST env_mem_callbacks
00:08:33.821 ************************************
00:08:33.821
00:08:33.821 real	0m6.447s
00:08:33.821 user	0m4.217s
00:08:33.821 sys	0m1.282s
00:08:33.821 13:40:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.821 13:40:05 env -- common/autotest_common.sh@10 -- # set +x
00:08:33.821 ************************************
00:08:33.821 END TEST env
00:08:33.821 ************************************
00:08:33.821 13:40:05 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:33.821 13:40:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:33.821 13:40:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.821 13:40:05 -- common/autotest_common.sh@10 -- # set +x
00:08:34.079 ************************************
00:08:34.079 START TEST rpc
00:08:34.079 ************************************
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:34.079 * Looking for test storage...
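The env_mem_callbacks trace above interleaves `register`, `malloc`, `buf`, `free`, and `unregister` events. One property visible in the raw numbers is that every reported `buf` range lies inside a region registered beforehand; a small sketch re-checking that containment from values copied out of the trace (the addresses and lengths are the log's, the helper function is mine):

```python
# Region and buffer (address, length) pairs copied from the
# env_mem_callbacks output above.
registered = [
    (0x200000200000, 2097152),   # register 0x200000200000 2097152
    (0x200000400000, 4194304),   # register 0x200000400000 4194304
    (0x200000800000, 6291456),   # register 0x200000800000 6291456
]
bufs = [
    (0x200000500000, 3145728),
    (0x2000004fff40, 64),
    (0x200000a00000, 4194304),
]

def contained(addr, length, regions):
    """True if [addr, addr + length) fits inside one registered region."""
    return any(base <= addr and addr + length <= base + rlen
               for base, rlen in regions)

assert all(contained(a, n, registered) for a, n in bufs)
```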
00:08:34.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:34.079 13:40:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:34.079 13:40:05 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:34.079 13:40:05 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:34.079 13:40:05 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:34.079 13:40:05 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:34.079 13:40:05 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:34.079 13:40:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:34.079 13:40:05 rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:34.079 13:40:05 rpc -- scripts/common.sh@345 -- # : 1
00:08:34.079 13:40:05 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:34.079 13:40:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:34.079 13:40:05 rpc -- scripts/common.sh@365 -- # decimal 1
00:08:34.079 13:40:05 rpc -- scripts/common.sh@353 -- # local d=1
00:08:34.079 13:40:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:34.079 13:40:05 rpc -- scripts/common.sh@355 -- # echo 1
00:08:34.079 13:40:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:34.079 13:40:05 rpc -- scripts/common.sh@366 -- # decimal 2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@353 -- # local d=2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:34.079 13:40:05 rpc -- scripts/common.sh@355 -- # echo 2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:34.079 13:40:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:34.079 13:40:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:34.079 13:40:05 rpc -- scripts/common.sh@368 -- # return 0
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:34.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.079 --rc genhtml_branch_coverage=1
00:08:34.079 --rc genhtml_function_coverage=1
00:08:34.079 --rc genhtml_legend=1
00:08:34.079 --rc geninfo_all_blocks=1
00:08:34.079 --rc geninfo_unexecuted_blocks=1
00:08:34.079
00:08:34.079 '
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:34.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.079 --rc genhtml_branch_coverage=1
00:08:34.079 --rc genhtml_function_coverage=1
00:08:34.079 --rc genhtml_legend=1
00:08:34.079 --rc geninfo_all_blocks=1
00:08:34.079 --rc geninfo_unexecuted_blocks=1
00:08:34.079
00:08:34.079 '
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:34.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.079 --rc genhtml_branch_coverage=1
00:08:34.079 --rc genhtml_function_coverage=1
00:08:34.079 --rc genhtml_legend=1
00:08:34.079 --rc geninfo_all_blocks=1
00:08:34.079 --rc geninfo_unexecuted_blocks=1
00:08:34.079
00:08:34.079 '
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:34.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.079 --rc genhtml_branch_coverage=1
00:08:34.079 --rc genhtml_function_coverage=1
00:08:34.079 --rc genhtml_legend=1
00:08:34.079 --rc geninfo_all_blocks=1
00:08:34.079 --rc geninfo_unexecuted_blocks=1
00:08:34.079
00:08:34.079 '
00:08:34.079 13:40:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2116032
00:08:34.079 13:40:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:08:34.079 13:40:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:34.079 13:40:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2116032
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 2116032 ']'
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:34.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:34.079 13:40:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:34.079 [2024-12-05 13:40:05.564988] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization...
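The `lt 1.15 2` xtrace earlier in this chunk walks scripts/common.sh's `cmp_versions`: split both version strings on `.`, `-`, and `:` (the `IFS=.-:` reads), then compare the numeric components up to the longer of the two lengths. A minimal Python re-implementation of that logic (the function name and the zero-padding of the shorter version are my reading of the trace, not SPDK's shell code verbatim):

```python
import re

def version_lt(a: str, b: str) -> bool:
    # Split on '.', '-' and ':', like the IFS=.-: reads in the trace.
    pa = [int(x) for x in re.split(r"[.:-]", a)]
    pb = [int(x) for x in re.split(r"[.:-]", b)]
    # Iterate up to the longer of ver1_l/ver2_l, treating missing
    # components as 0 (an assumption about the shell loop's behavior).
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    for x, y in zip(pa, pb):
        if x != y:
            return x < y
    return False

# Matches the trace: lt 1.15 2 returns 0 (true) because 1 < 2.
assert version_lt("1.15", "2")
```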
00:08:34.079 [2024-12-05 13:40:05.565080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116032 ]
00:08:34.337 [2024-12-05 13:40:05.631627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.337 [2024-12-05 13:40:05.686655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:34.337 [2024-12-05 13:40:05.686725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2116032' to capture a snapshot of events at runtime.
00:08:34.337 [2024-12-05 13:40:05.686739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:34.337 [2024-12-05 13:40:05.686751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:34.337 [2024-12-05 13:40:05.686760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2116032 for offline analysis/debug.
00:08:34.337 [2024-12-05 13:40:05.687367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.595 13:40:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.595 13:40:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:34.595 13:40:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:34.595 13:40:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:34.595 13:40:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:34.595 13:40:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:34.595 13:40:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.595 13:40:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.595 13:40:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.595 ************************************ 00:08:34.595 START TEST rpc_integrity 00:08:34.595 ************************************ 00:08:34.595 13:40:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:34.595 13:40:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:34.595 13:40:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.595 13:40:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.595 13:40:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.595 13:40:05 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:08:34.595 13:40:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:34.595 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:34.595 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:34.595 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.595 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.595 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.595 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:34.595 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:34.595 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.595 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.595 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.595 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:34.595 { 00:08:34.595 "name": "Malloc0", 00:08:34.595 "aliases": [ 00:08:34.595 "98f89ff8-3adb-4838-821e-193261b3761c" 00:08:34.595 ], 00:08:34.595 "product_name": "Malloc disk", 00:08:34.595 "block_size": 512, 00:08:34.595 "num_blocks": 16384, 00:08:34.595 "uuid": "98f89ff8-3adb-4838-821e-193261b3761c", 00:08:34.595 "assigned_rate_limits": { 00:08:34.595 "rw_ios_per_sec": 0, 00:08:34.595 "rw_mbytes_per_sec": 0, 00:08:34.595 "r_mbytes_per_sec": 0, 00:08:34.596 "w_mbytes_per_sec": 0 00:08:34.596 }, 00:08:34.596 "claimed": false, 00:08:34.596 "zoned": false, 00:08:34.596 "supported_io_types": { 00:08:34.596 "read": true, 00:08:34.596 "write": true, 00:08:34.596 "unmap": true, 00:08:34.596 "flush": true, 00:08:34.596 "reset": true, 00:08:34.596 "nvme_admin": false, 00:08:34.596 "nvme_io": false, 00:08:34.596 "nvme_io_md": false, 00:08:34.596 "write_zeroes": true, 00:08:34.596 "zcopy": true, 00:08:34.596 "get_zone_info": false, 00:08:34.596 
"zone_management": false, 00:08:34.596 "zone_append": false, 00:08:34.596 "compare": false, 00:08:34.596 "compare_and_write": false, 00:08:34.596 "abort": true, 00:08:34.596 "seek_hole": false, 00:08:34.596 "seek_data": false, 00:08:34.596 "copy": true, 00:08:34.596 "nvme_iov_md": false 00:08:34.596 }, 00:08:34.596 "memory_domains": [ 00:08:34.596 { 00:08:34.596 "dma_device_id": "system", 00:08:34.596 "dma_device_type": 1 00:08:34.596 }, 00:08:34.596 { 00:08:34.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.596 "dma_device_type": 2 00:08:34.596 } 00:08:34.596 ], 00:08:34.596 "driver_specific": {} 00:08:34.596 } 00:08:34.596 ]' 00:08:34.596 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:34.596 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:34.596 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:34.596 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.596 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.596 [2024-12-05 13:40:06.085109] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:34.596 [2024-12-05 13:40:06.085144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.596 [2024-12-05 13:40:06.085180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c63130 00:08:34.596 [2024-12-05 13:40:06.085192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.596 [2024-12-05 13:40:06.086519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.596 [2024-12-05 13:40:06.086544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:34.596 Passthru0 00:08:34.596 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.596 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:08:34.596 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.596 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.596 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.596 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:34.596 { 00:08:34.596 "name": "Malloc0", 00:08:34.596 "aliases": [ 00:08:34.596 "98f89ff8-3adb-4838-821e-193261b3761c" 00:08:34.596 ], 00:08:34.596 "product_name": "Malloc disk", 00:08:34.596 "block_size": 512, 00:08:34.596 "num_blocks": 16384, 00:08:34.596 "uuid": "98f89ff8-3adb-4838-821e-193261b3761c", 00:08:34.596 "assigned_rate_limits": { 00:08:34.596 "rw_ios_per_sec": 0, 00:08:34.596 "rw_mbytes_per_sec": 0, 00:08:34.596 "r_mbytes_per_sec": 0, 00:08:34.596 "w_mbytes_per_sec": 0 00:08:34.596 }, 00:08:34.596 "claimed": true, 00:08:34.596 "claim_type": "exclusive_write", 00:08:34.596 "zoned": false, 00:08:34.596 "supported_io_types": { 00:08:34.596 "read": true, 00:08:34.596 "write": true, 00:08:34.596 "unmap": true, 00:08:34.596 "flush": true, 00:08:34.596 "reset": true, 00:08:34.596 "nvme_admin": false, 00:08:34.596 "nvme_io": false, 00:08:34.596 "nvme_io_md": false, 00:08:34.596 "write_zeroes": true, 00:08:34.596 "zcopy": true, 00:08:34.596 "get_zone_info": false, 00:08:34.596 "zone_management": false, 00:08:34.596 "zone_append": false, 00:08:34.596 "compare": false, 00:08:34.596 "compare_and_write": false, 00:08:34.596 "abort": true, 00:08:34.596 "seek_hole": false, 00:08:34.596 "seek_data": false, 00:08:34.596 "copy": true, 00:08:34.596 "nvme_iov_md": false 00:08:34.596 }, 00:08:34.596 "memory_domains": [ 00:08:34.596 { 00:08:34.596 "dma_device_id": "system", 00:08:34.596 "dma_device_type": 1 00:08:34.596 }, 00:08:34.596 { 00:08:34.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.596 "dma_device_type": 2 00:08:34.596 } 00:08:34.596 ], 00:08:34.596 "driver_specific": {} 00:08:34.596 }, 00:08:34.596 { 
00:08:34.596 "name": "Passthru0", 00:08:34.596 "aliases": [ 00:08:34.596 "b95f5e86-862c-59b1-95ae-b6563a627d69" 00:08:34.596 ], 00:08:34.596 "product_name": "passthru", 00:08:34.596 "block_size": 512, 00:08:34.596 "num_blocks": 16384, 00:08:34.596 "uuid": "b95f5e86-862c-59b1-95ae-b6563a627d69", 00:08:34.596 "assigned_rate_limits": { 00:08:34.596 "rw_ios_per_sec": 0, 00:08:34.596 "rw_mbytes_per_sec": 0, 00:08:34.596 "r_mbytes_per_sec": 0, 00:08:34.596 "w_mbytes_per_sec": 0 00:08:34.596 }, 00:08:34.596 "claimed": false, 00:08:34.596 "zoned": false, 00:08:34.596 "supported_io_types": { 00:08:34.596 "read": true, 00:08:34.596 "write": true, 00:08:34.596 "unmap": true, 00:08:34.596 "flush": true, 00:08:34.596 "reset": true, 00:08:34.596 "nvme_admin": false, 00:08:34.596 "nvme_io": false, 00:08:34.596 "nvme_io_md": false, 00:08:34.596 "write_zeroes": true, 00:08:34.596 "zcopy": true, 00:08:34.596 "get_zone_info": false, 00:08:34.596 "zone_management": false, 00:08:34.596 "zone_append": false, 00:08:34.596 "compare": false, 00:08:34.596 "compare_and_write": false, 00:08:34.596 "abort": true, 00:08:34.596 "seek_hole": false, 00:08:34.596 "seek_data": false, 00:08:34.596 "copy": true, 00:08:34.596 "nvme_iov_md": false 00:08:34.596 }, 00:08:34.596 "memory_domains": [ 00:08:34.596 { 00:08:34.596 "dma_device_id": "system", 00:08:34.596 "dma_device_type": 1 00:08:34.596 }, 00:08:34.596 { 00:08:34.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.596 "dma_device_type": 2 00:08:34.596 } 00:08:34.596 ], 00:08:34.596 "driver_specific": { 00:08:34.596 "passthru": { 00:08:34.596 "name": "Passthru0", 00:08:34.596 "base_bdev_name": "Malloc0" 00:08:34.596 } 00:08:34.596 } 00:08:34.596 } 00:08:34.596 ]' 00:08:34.596 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:34.855 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:34.855 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:34.855 13:40:06 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.855 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.855 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.855 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:34.855 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:34.855 13:40:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:34.855 00:08:34.855 real 0m0.214s 00:08:34.855 user 0m0.136s 00:08:34.855 sys 0m0.020s 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 ************************************ 00:08:34.855 END TEST rpc_integrity 00:08:34.855 ************************************ 00:08:34.855 13:40:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:34.855 13:40:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.855 13:40:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.855 13:40:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 ************************************ 00:08:34.855 START TEST rpc_plugins 
00:08:34.855 ************************************ 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:34.855 { 00:08:34.855 "name": "Malloc1", 00:08:34.855 "aliases": [ 00:08:34.855 "6bcca90f-babe-479c-ae5b-3d9f92188d68" 00:08:34.855 ], 00:08:34.855 "product_name": "Malloc disk", 00:08:34.855 "block_size": 4096, 00:08:34.855 "num_blocks": 256, 00:08:34.855 "uuid": "6bcca90f-babe-479c-ae5b-3d9f92188d68", 00:08:34.855 "assigned_rate_limits": { 00:08:34.855 "rw_ios_per_sec": 0, 00:08:34.855 "rw_mbytes_per_sec": 0, 00:08:34.855 "r_mbytes_per_sec": 0, 00:08:34.855 "w_mbytes_per_sec": 0 00:08:34.855 }, 00:08:34.855 "claimed": false, 00:08:34.855 "zoned": false, 00:08:34.855 "supported_io_types": { 00:08:34.855 "read": true, 00:08:34.855 "write": true, 00:08:34.855 "unmap": true, 00:08:34.855 "flush": true, 00:08:34.855 "reset": true, 00:08:34.855 "nvme_admin": false, 00:08:34.855 "nvme_io": false, 00:08:34.855 "nvme_io_md": false, 00:08:34.855 "write_zeroes": true, 00:08:34.855 "zcopy": true, 00:08:34.855 "get_zone_info": false, 00:08:34.855 "zone_management": false, 00:08:34.855 
"zone_append": false, 00:08:34.855 "compare": false, 00:08:34.855 "compare_and_write": false, 00:08:34.855 "abort": true, 00:08:34.855 "seek_hole": false, 00:08:34.855 "seek_data": false, 00:08:34.855 "copy": true, 00:08:34.855 "nvme_iov_md": false 00:08:34.855 }, 00:08:34.855 "memory_domains": [ 00:08:34.855 { 00:08:34.855 "dma_device_id": "system", 00:08:34.855 "dma_device_type": 1 00:08:34.855 }, 00:08:34.855 { 00:08:34.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.855 "dma_device_type": 2 00:08:34.855 } 00:08:34.855 ], 00:08:34.855 "driver_specific": {} 00:08:34.855 } 00:08:34.855 ]' 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:34.855 13:40:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:34.855 00:08:34.855 real 0m0.104s 00:08:34.855 user 0m0.070s 00:08:34.855 sys 0m0.008s 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.855 13:40:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 ************************************ 
00:08:34.855 END TEST rpc_plugins 00:08:34.855 ************************************ 00:08:34.855 13:40:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:34.855 13:40:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.855 13:40:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.855 13:40:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.113 ************************************ 00:08:35.113 START TEST rpc_trace_cmd_test 00:08:35.113 ************************************ 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:35.113 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2116032", 00:08:35.113 "tpoint_group_mask": "0x8", 00:08:35.113 "iscsi_conn": { 00:08:35.113 "mask": "0x2", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "scsi": { 00:08:35.113 "mask": "0x4", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "bdev": { 00:08:35.113 "mask": "0x8", 00:08:35.113 "tpoint_mask": "0xffffffffffffffff" 00:08:35.113 }, 00:08:35.113 "nvmf_rdma": { 00:08:35.113 "mask": "0x10", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "nvmf_tcp": { 00:08:35.113 "mask": "0x20", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "ftl": { 00:08:35.113 "mask": "0x40", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "blobfs": { 00:08:35.113 "mask": "0x80", 00:08:35.113 
"tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "dsa": { 00:08:35.113 "mask": "0x200", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "thread": { 00:08:35.113 "mask": "0x400", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "nvme_pcie": { 00:08:35.113 "mask": "0x800", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "iaa": { 00:08:35.113 "mask": "0x1000", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "nvme_tcp": { 00:08:35.113 "mask": "0x2000", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "bdev_nvme": { 00:08:35.113 "mask": "0x4000", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "sock": { 00:08:35.113 "mask": "0x8000", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "blob": { 00:08:35.113 "mask": "0x10000", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "bdev_raid": { 00:08:35.113 "mask": "0x20000", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 }, 00:08:35.113 "scheduler": { 00:08:35.113 "mask": "0x40000", 00:08:35.113 "tpoint_mask": "0x0" 00:08:35.113 } 00:08:35.113 }' 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:35.113 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:35.114 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:35.114 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:35.114 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:35.114 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:35.114 13:40:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:08:35.114 00:08:35.114 real 0m0.175s 00:08:35.114 user 0m0.152s 00:08:35.114 sys 0m0.016s 00:08:35.114 13:40:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.114 13:40:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.114 ************************************ 00:08:35.114 END TEST rpc_trace_cmd_test 00:08:35.114 ************************************ 00:08:35.114 13:40:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:35.114 13:40:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:35.114 13:40:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:35.114 13:40:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.114 13:40:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.114 13:40:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.114 ************************************ 00:08:35.114 START TEST rpc_daemon_integrity 00:08:35.114 ************************************ 00:08:35.114 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:35.114 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:35.114 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.114 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.114 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.114 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:35.114 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.372 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:35.372 { 00:08:35.372 "name": "Malloc2", 00:08:35.372 "aliases": [ 00:08:35.372 "aed6fc26-cfea-400c-be59-18802bbf3941" 00:08:35.372 ], 00:08:35.372 "product_name": "Malloc disk", 00:08:35.372 "block_size": 512, 00:08:35.372 "num_blocks": 16384, 00:08:35.372 "uuid": "aed6fc26-cfea-400c-be59-18802bbf3941", 00:08:35.372 "assigned_rate_limits": { 00:08:35.372 "rw_ios_per_sec": 0, 00:08:35.372 "rw_mbytes_per_sec": 0, 00:08:35.372 "r_mbytes_per_sec": 0, 00:08:35.372 "w_mbytes_per_sec": 0 00:08:35.372 }, 00:08:35.372 "claimed": false, 00:08:35.372 "zoned": false, 00:08:35.372 "supported_io_types": { 00:08:35.372 "read": true, 00:08:35.372 "write": true, 00:08:35.372 "unmap": true, 00:08:35.372 "flush": true, 00:08:35.372 "reset": true, 00:08:35.373 "nvme_admin": false, 00:08:35.373 "nvme_io": false, 00:08:35.373 "nvme_io_md": false, 00:08:35.373 "write_zeroes": true, 00:08:35.373 "zcopy": true, 00:08:35.373 "get_zone_info": false, 00:08:35.373 "zone_management": false, 00:08:35.373 "zone_append": false, 00:08:35.373 "compare": false, 00:08:35.373 "compare_and_write": false, 00:08:35.373 "abort": true, 00:08:35.373 "seek_hole": false, 00:08:35.373 "seek_data": false, 00:08:35.373 "copy": true, 00:08:35.373 "nvme_iov_md": false 00:08:35.373 }, 00:08:35.373 "memory_domains": [ 00:08:35.373 { 
00:08:35.373 "dma_device_id": "system", 00:08:35.373 "dma_device_type": 1 00:08:35.373 }, 00:08:35.373 { 00:08:35.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.373 "dma_device_type": 2 00:08:35.373 } 00:08:35.373 ], 00:08:35.373 "driver_specific": {} 00:08:35.373 } 00:08:35.373 ]' 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.373 [2024-12-05 13:40:06.703165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:35.373 [2024-12-05 13:40:06.703202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.373 [2024-12-05 13:40:06.703224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da6ba0 00:08:35.373 [2024-12-05 13:40:06.703266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.373 [2024-12-05 13:40:06.704492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.373 [2024-12-05 13:40:06.704518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:35.373 Passthru0 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:35.373 { 00:08:35.373 "name": "Malloc2", 00:08:35.373 "aliases": [ 00:08:35.373 "aed6fc26-cfea-400c-be59-18802bbf3941" 00:08:35.373 ], 00:08:35.373 "product_name": "Malloc disk", 00:08:35.373 "block_size": 512, 00:08:35.373 "num_blocks": 16384, 00:08:35.373 "uuid": "aed6fc26-cfea-400c-be59-18802bbf3941", 00:08:35.373 "assigned_rate_limits": { 00:08:35.373 "rw_ios_per_sec": 0, 00:08:35.373 "rw_mbytes_per_sec": 0, 00:08:35.373 "r_mbytes_per_sec": 0, 00:08:35.373 "w_mbytes_per_sec": 0 00:08:35.373 }, 00:08:35.373 "claimed": true, 00:08:35.373 "claim_type": "exclusive_write", 00:08:35.373 "zoned": false, 00:08:35.373 "supported_io_types": { 00:08:35.373 "read": true, 00:08:35.373 "write": true, 00:08:35.373 "unmap": true, 00:08:35.373 "flush": true, 00:08:35.373 "reset": true, 00:08:35.373 "nvme_admin": false, 00:08:35.373 "nvme_io": false, 00:08:35.373 "nvme_io_md": false, 00:08:35.373 "write_zeroes": true, 00:08:35.373 "zcopy": true, 00:08:35.373 "get_zone_info": false, 00:08:35.373 "zone_management": false, 00:08:35.373 "zone_append": false, 00:08:35.373 "compare": false, 00:08:35.373 "compare_and_write": false, 00:08:35.373 "abort": true, 00:08:35.373 "seek_hole": false, 00:08:35.373 "seek_data": false, 00:08:35.373 "copy": true, 00:08:35.373 "nvme_iov_md": false 00:08:35.373 }, 00:08:35.373 "memory_domains": [ 00:08:35.373 { 00:08:35.373 "dma_device_id": "system", 00:08:35.373 "dma_device_type": 1 00:08:35.373 }, 00:08:35.373 { 00:08:35.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.373 "dma_device_type": 2 00:08:35.373 } 00:08:35.373 ], 00:08:35.373 "driver_specific": {} 00:08:35.373 }, 00:08:35.373 { 00:08:35.373 "name": "Passthru0", 00:08:35.373 "aliases": [ 00:08:35.373 "3ee5d7b3-81df-5bd2-ab56-37d2e255f09b" 00:08:35.373 ], 00:08:35.373 "product_name": "passthru", 00:08:35.373 "block_size": 512, 00:08:35.373 "num_blocks": 16384, 00:08:35.373 "uuid": 
"3ee5d7b3-81df-5bd2-ab56-37d2e255f09b", 00:08:35.373 "assigned_rate_limits": { 00:08:35.373 "rw_ios_per_sec": 0, 00:08:35.373 "rw_mbytes_per_sec": 0, 00:08:35.373 "r_mbytes_per_sec": 0, 00:08:35.373 "w_mbytes_per_sec": 0 00:08:35.373 }, 00:08:35.373 "claimed": false, 00:08:35.373 "zoned": false, 00:08:35.373 "supported_io_types": { 00:08:35.373 "read": true, 00:08:35.373 "write": true, 00:08:35.373 "unmap": true, 00:08:35.373 "flush": true, 00:08:35.373 "reset": true, 00:08:35.373 "nvme_admin": false, 00:08:35.373 "nvme_io": false, 00:08:35.373 "nvme_io_md": false, 00:08:35.373 "write_zeroes": true, 00:08:35.373 "zcopy": true, 00:08:35.373 "get_zone_info": false, 00:08:35.373 "zone_management": false, 00:08:35.373 "zone_append": false, 00:08:35.373 "compare": false, 00:08:35.373 "compare_and_write": false, 00:08:35.373 "abort": true, 00:08:35.373 "seek_hole": false, 00:08:35.373 "seek_data": false, 00:08:35.373 "copy": true, 00:08:35.373 "nvme_iov_md": false 00:08:35.373 }, 00:08:35.373 "memory_domains": [ 00:08:35.373 { 00:08:35.373 "dma_device_id": "system", 00:08:35.373 "dma_device_type": 1 00:08:35.373 }, 00:08:35.373 { 00:08:35.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.373 "dma_device_type": 2 00:08:35.373 } 00:08:35.373 ], 00:08:35.373 "driver_specific": { 00:08:35.373 "passthru": { 00:08:35.373 "name": "Passthru0", 00:08:35.373 "base_bdev_name": "Malloc2" 00:08:35.373 } 00:08:35.373 } 00:08:35.373 } 00:08:35.373 ]' 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:35.373 00:08:35.373 real 0m0.212s 00:08:35.373 user 0m0.138s 00:08:35.373 sys 0m0.020s 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.373 13:40:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.373 ************************************ 00:08:35.373 END TEST rpc_daemon_integrity 00:08:35.373 ************************************ 00:08:35.373 13:40:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:35.373 13:40:06 rpc -- rpc/rpc.sh@84 -- # killprocess 2116032 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 2116032 ']' 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@958 -- # kill -0 2116032 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@959 -- # uname 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.373 13:40:06 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2116032 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2116032' 00:08:35.373 killing process with pid 2116032 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@973 -- # kill 2116032 00:08:35.373 13:40:06 rpc -- common/autotest_common.sh@978 -- # wait 2116032 00:08:35.939 00:08:35.939 real 0m1.928s 00:08:35.939 user 0m2.354s 00:08:35.939 sys 0m0.620s 00:08:35.939 13:40:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.939 13:40:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.939 ************************************ 00:08:35.939 END TEST rpc 00:08:35.939 ************************************ 00:08:35.939 13:40:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:35.939 13:40:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.939 13:40:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.939 13:40:07 -- common/autotest_common.sh@10 -- # set +x 00:08:35.939 ************************************ 00:08:35.939 START TEST skip_rpc 00:08:35.939 ************************************ 00:08:35.940 13:40:07 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:35.940 * Looking for test storage... 
00:08:35.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:35.940 13:40:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.940 13:40:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.940 13:40:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.198 13:40:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.198 --rc genhtml_branch_coverage=1 00:08:36.198 --rc genhtml_function_coverage=1 00:08:36.198 --rc genhtml_legend=1 00:08:36.198 --rc geninfo_all_blocks=1 00:08:36.198 --rc geninfo_unexecuted_blocks=1 00:08:36.198 00:08:36.198 ' 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.198 --rc genhtml_branch_coverage=1 00:08:36.198 --rc genhtml_function_coverage=1 00:08:36.198 --rc genhtml_legend=1 00:08:36.198 --rc geninfo_all_blocks=1 00:08:36.198 --rc geninfo_unexecuted_blocks=1 00:08:36.198 00:08:36.198 ' 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:08:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.198 --rc genhtml_branch_coverage=1 00:08:36.198 --rc genhtml_function_coverage=1 00:08:36.198 --rc genhtml_legend=1 00:08:36.198 --rc geninfo_all_blocks=1 00:08:36.198 --rc geninfo_unexecuted_blocks=1 00:08:36.198 00:08:36.198 ' 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.198 --rc genhtml_branch_coverage=1 00:08:36.198 --rc genhtml_function_coverage=1 00:08:36.198 --rc genhtml_legend=1 00:08:36.198 --rc geninfo_all_blocks=1 00:08:36.198 --rc geninfo_unexecuted_blocks=1 00:08:36.198 00:08:36.198 ' 00:08:36.198 13:40:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:36.198 13:40:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:36.198 13:40:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.198 13:40:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.198 ************************************ 00:08:36.198 START TEST skip_rpc 00:08:36.198 ************************************ 00:08:36.198 13:40:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:36.198 13:40:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2116410 00:08:36.198 13:40:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:36.198 13:40:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.198 13:40:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:08:36.198 [2024-12-05 13:40:07.562956] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:08:36.198 [2024-12-05 13:40:07.563023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116410 ] 00:08:36.198 [2024-12-05 13:40:07.626230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.198 [2024-12-05 13:40:07.682058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.460 13:40:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2116410 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2116410 ']' 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2116410 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2116410 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2116410' 00:08:41.460 killing process with pid 2116410 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2116410 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2116410 00:08:41.460 00:08:41.460 real 0m5.448s 00:08:41.460 user 0m5.151s 00:08:41.460 sys 0m0.315s 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.460 13:40:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.460 ************************************ 00:08:41.460 END TEST skip_rpc 00:08:41.460 ************************************ 00:08:41.460 13:40:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:41.460 13:40:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.460 13:40:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.460 13:40:12 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.719 ************************************ 00:08:41.719 START TEST skip_rpc_with_json 00:08:41.719 ************************************ 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2117094 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2117094 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2117094 ']' 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.719 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:41.719 [2024-12-05 13:40:13.060823] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:08:41.719 [2024-12-05 13:40:13.060898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117094 ] 00:08:41.719 [2024-12-05 13:40:13.127573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.719 [2024-12-05 13:40:13.179633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:41.976 [2024-12-05 13:40:13.437810] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:41.976 request: 00:08:41.976 { 00:08:41.976 "trtype": "tcp", 00:08:41.976 "method": "nvmf_get_transports", 00:08:41.976 "req_id": 1 00:08:41.976 } 00:08:41.976 Got JSON-RPC error response 00:08:41.976 response: 00:08:41.976 { 00:08:41.976 "code": -19, 00:08:41.976 "message": "No such device" 00:08:41.976 } 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:41.976 [2024-12-05 13:40:13.445900] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.976 13:40:13 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.976 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:42.234 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.234 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:42.234 { 00:08:42.234 "subsystems": [ 00:08:42.234 { 00:08:42.234 "subsystem": "fsdev", 00:08:42.234 "config": [ 00:08:42.234 { 00:08:42.234 "method": "fsdev_set_opts", 00:08:42.234 "params": { 00:08:42.234 "fsdev_io_pool_size": 65535, 00:08:42.234 "fsdev_io_cache_size": 256 00:08:42.234 } 00:08:42.234 } 00:08:42.234 ] 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "subsystem": "vfio_user_target", 00:08:42.234 "config": null 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "subsystem": "keyring", 00:08:42.234 "config": [] 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "subsystem": "iobuf", 00:08:42.234 "config": [ 00:08:42.234 { 00:08:42.234 "method": "iobuf_set_options", 00:08:42.234 "params": { 00:08:42.234 "small_pool_count": 8192, 00:08:42.234 "large_pool_count": 1024, 00:08:42.234 "small_bufsize": 8192, 00:08:42.234 "large_bufsize": 135168, 00:08:42.234 "enable_numa": false 00:08:42.234 } 00:08:42.234 } 00:08:42.234 ] 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "subsystem": "sock", 00:08:42.234 "config": [ 00:08:42.234 { 00:08:42.234 "method": "sock_set_default_impl", 00:08:42.234 "params": { 00:08:42.234 "impl_name": "posix" 00:08:42.234 } 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "method": "sock_impl_set_options", 00:08:42.234 "params": { 00:08:42.234 "impl_name": "ssl", 00:08:42.234 "recv_buf_size": 4096, 00:08:42.234 "send_buf_size": 4096, 
00:08:42.234 "enable_recv_pipe": true, 00:08:42.234 "enable_quickack": false, 00:08:42.234 "enable_placement_id": 0, 00:08:42.234 "enable_zerocopy_send_server": true, 00:08:42.234 "enable_zerocopy_send_client": false, 00:08:42.234 "zerocopy_threshold": 0, 00:08:42.234 "tls_version": 0, 00:08:42.234 "enable_ktls": false 00:08:42.234 } 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "method": "sock_impl_set_options", 00:08:42.234 "params": { 00:08:42.234 "impl_name": "posix", 00:08:42.234 "recv_buf_size": 2097152, 00:08:42.234 "send_buf_size": 2097152, 00:08:42.234 "enable_recv_pipe": true, 00:08:42.234 "enable_quickack": false, 00:08:42.234 "enable_placement_id": 0, 00:08:42.234 "enable_zerocopy_send_server": true, 00:08:42.234 "enable_zerocopy_send_client": false, 00:08:42.234 "zerocopy_threshold": 0, 00:08:42.234 "tls_version": 0, 00:08:42.234 "enable_ktls": false 00:08:42.234 } 00:08:42.234 } 00:08:42.234 ] 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "subsystem": "vmd", 00:08:42.234 "config": [] 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "subsystem": "accel", 00:08:42.234 "config": [ 00:08:42.234 { 00:08:42.234 "method": "accel_set_options", 00:08:42.234 "params": { 00:08:42.234 "small_cache_size": 128, 00:08:42.234 "large_cache_size": 16, 00:08:42.234 "task_count": 2048, 00:08:42.234 "sequence_count": 2048, 00:08:42.234 "buf_count": 2048 00:08:42.234 } 00:08:42.234 } 00:08:42.234 ] 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "subsystem": "bdev", 00:08:42.234 "config": [ 00:08:42.234 { 00:08:42.234 "method": "bdev_set_options", 00:08:42.234 "params": { 00:08:42.234 "bdev_io_pool_size": 65535, 00:08:42.234 "bdev_io_cache_size": 256, 00:08:42.234 "bdev_auto_examine": true, 00:08:42.234 "iobuf_small_cache_size": 128, 00:08:42.234 "iobuf_large_cache_size": 16 00:08:42.234 } 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "method": "bdev_raid_set_options", 00:08:42.234 "params": { 00:08:42.234 "process_window_size_kb": 1024, 00:08:42.234 "process_max_bandwidth_mb_sec": 0 
00:08:42.234 } 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "method": "bdev_iscsi_set_options", 00:08:42.234 "params": { 00:08:42.234 "timeout_sec": 30 00:08:42.234 } 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "method": "bdev_nvme_set_options", 00:08:42.234 "params": { 00:08:42.234 "action_on_timeout": "none", 00:08:42.234 "timeout_us": 0, 00:08:42.234 "timeout_admin_us": 0, 00:08:42.234 "keep_alive_timeout_ms": 10000, 00:08:42.234 "arbitration_burst": 0, 00:08:42.234 "low_priority_weight": 0, 00:08:42.234 "medium_priority_weight": 0, 00:08:42.234 "high_priority_weight": 0, 00:08:42.234 "nvme_adminq_poll_period_us": 10000, 00:08:42.234 "nvme_ioq_poll_period_us": 0, 00:08:42.234 "io_queue_requests": 0, 00:08:42.234 "delay_cmd_submit": true, 00:08:42.234 "transport_retry_count": 4, 00:08:42.234 "bdev_retry_count": 3, 00:08:42.234 "transport_ack_timeout": 0, 00:08:42.234 "ctrlr_loss_timeout_sec": 0, 00:08:42.234 "reconnect_delay_sec": 0, 00:08:42.234 "fast_io_fail_timeout_sec": 0, 00:08:42.234 "disable_auto_failback": false, 00:08:42.234 "generate_uuids": false, 00:08:42.234 "transport_tos": 0, 00:08:42.234 "nvme_error_stat": false, 00:08:42.234 "rdma_srq_size": 0, 00:08:42.234 "io_path_stat": false, 00:08:42.234 "allow_accel_sequence": false, 00:08:42.234 "rdma_max_cq_size": 0, 00:08:42.234 "rdma_cm_event_timeout_ms": 0, 00:08:42.234 "dhchap_digests": [ 00:08:42.234 "sha256", 00:08:42.234 "sha384", 00:08:42.234 "sha512" 00:08:42.234 ], 00:08:42.234 "dhchap_dhgroups": [ 00:08:42.234 "null", 00:08:42.234 "ffdhe2048", 00:08:42.234 "ffdhe3072", 00:08:42.234 "ffdhe4096", 00:08:42.234 "ffdhe6144", 00:08:42.234 "ffdhe8192" 00:08:42.234 ] 00:08:42.234 } 00:08:42.234 }, 00:08:42.234 { 00:08:42.234 "method": "bdev_nvme_set_hotplug", 00:08:42.234 "params": { 00:08:42.234 "period_us": 100000, 00:08:42.234 "enable": false 00:08:42.234 } 00:08:42.234 }, 00:08:42.235 { 00:08:42.235 "method": "bdev_wait_for_examine" 00:08:42.235 } 00:08:42.235 ] 00:08:42.235 }, 00:08:42.235 { 
00:08:42.235 "subsystem": "scsi", 00:08:42.235 "config": null 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "subsystem": "scheduler", 00:08:42.235 "config": [ 00:08:42.235 { 00:08:42.235 "method": "framework_set_scheduler", 00:08:42.235 "params": { 00:08:42.235 "name": "static" 00:08:42.235 } 00:08:42.235 } 00:08:42.235 ] 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "subsystem": "vhost_scsi", 00:08:42.235 "config": [] 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "subsystem": "vhost_blk", 00:08:42.235 "config": [] 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "subsystem": "ublk", 00:08:42.235 "config": [] 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "subsystem": "nbd", 00:08:42.235 "config": [] 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "subsystem": "nvmf", 00:08:42.235 "config": [ 00:08:42.235 { 00:08:42.235 "method": "nvmf_set_config", 00:08:42.235 "params": { 00:08:42.235 "discovery_filter": "match_any", 00:08:42.235 "admin_cmd_passthru": { 00:08:42.235 "identify_ctrlr": false 00:08:42.235 }, 00:08:42.235 "dhchap_digests": [ 00:08:42.235 "sha256", 00:08:42.235 "sha384", 00:08:42.235 "sha512" 00:08:42.235 ], 00:08:42.235 "dhchap_dhgroups": [ 00:08:42.235 "null", 00:08:42.235 "ffdhe2048", 00:08:42.235 "ffdhe3072", 00:08:42.235 "ffdhe4096", 00:08:42.235 "ffdhe6144", 00:08:42.235 "ffdhe8192" 00:08:42.235 ] 00:08:42.235 } 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "method": "nvmf_set_max_subsystems", 00:08:42.235 "params": { 00:08:42.235 "max_subsystems": 1024 00:08:42.235 } 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "method": "nvmf_set_crdt", 00:08:42.235 "params": { 00:08:42.235 "crdt1": 0, 00:08:42.235 "crdt2": 0, 00:08:42.235 "crdt3": 0 00:08:42.235 } 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "method": "nvmf_create_transport", 00:08:42.235 "params": { 00:08:42.235 "trtype": "TCP", 00:08:42.235 "max_queue_depth": 128, 00:08:42.235 "max_io_qpairs_per_ctrlr": 127, 00:08:42.235 "in_capsule_data_size": 4096, 00:08:42.235 "max_io_size": 131072, 00:08:42.235 
"io_unit_size": 131072, 00:08:42.235 "max_aq_depth": 128, 00:08:42.235 "num_shared_buffers": 511, 00:08:42.235 "buf_cache_size": 4294967295, 00:08:42.235 "dif_insert_or_strip": false, 00:08:42.235 "zcopy": false, 00:08:42.235 "c2h_success": true, 00:08:42.235 "sock_priority": 0, 00:08:42.235 "abort_timeout_sec": 1, 00:08:42.235 "ack_timeout": 0, 00:08:42.235 "data_wr_pool_size": 0 00:08:42.235 } 00:08:42.235 } 00:08:42.235 ] 00:08:42.235 }, 00:08:42.235 { 00:08:42.235 "subsystem": "iscsi", 00:08:42.235 "config": [ 00:08:42.235 { 00:08:42.235 "method": "iscsi_set_options", 00:08:42.235 "params": { 00:08:42.235 "node_base": "iqn.2016-06.io.spdk", 00:08:42.235 "max_sessions": 128, 00:08:42.235 "max_connections_per_session": 2, 00:08:42.235 "max_queue_depth": 64, 00:08:42.235 "default_time2wait": 2, 00:08:42.235 "default_time2retain": 20, 00:08:42.235 "first_burst_length": 8192, 00:08:42.235 "immediate_data": true, 00:08:42.235 "allow_duplicated_isid": false, 00:08:42.235 "error_recovery_level": 0, 00:08:42.235 "nop_timeout": 60, 00:08:42.235 "nop_in_interval": 30, 00:08:42.235 "disable_chap": false, 00:08:42.235 "require_chap": false, 00:08:42.235 "mutual_chap": false, 00:08:42.235 "chap_group": 0, 00:08:42.235 "max_large_datain_per_connection": 64, 00:08:42.235 "max_r2t_per_connection": 4, 00:08:42.235 "pdu_pool_size": 36864, 00:08:42.235 "immediate_data_pool_size": 16384, 00:08:42.235 "data_out_pool_size": 2048 00:08:42.235 } 00:08:42.235 } 00:08:42.235 ] 00:08:42.235 } 00:08:42.235 ] 00:08:42.235 } 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2117094 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2117094 ']' 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2117094 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117094 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117094' 00:08:42.235 killing process with pid 2117094 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2117094 00:08:42.235 13:40:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2117094 00:08:42.801 13:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2117233 00:08:42.801 13:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:42.801 13:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2117233 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2117233 ']' 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2117233 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117233 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117233' 00:08:48.083 killing process with pid 2117233 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2117233 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2117233 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:48.083 00:08:48.083 real 0m6.489s 00:08:48.083 user 0m6.154s 00:08:48.083 sys 0m0.646s 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:48.083 ************************************ 00:08:48.083 END TEST skip_rpc_with_json 00:08:48.083 ************************************ 00:08:48.083 13:40:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:48.083 13:40:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.083 13:40:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.083 13:40:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.083 ************************************ 00:08:48.083 START TEST skip_rpc_with_delay 00:08:48.083 ************************************ 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:48.083 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:48.083 [2024-12-05 13:40:19.602936] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:48.342 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:48.342 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.342 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.342 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.342 00:08:48.342 real 0m0.076s 00:08:48.342 user 0m0.052s 00:08:48.342 sys 0m0.023s 00:08:48.342 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.342 13:40:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:48.342 ************************************ 00:08:48.342 END TEST skip_rpc_with_delay 00:08:48.342 ************************************ 00:08:48.342 13:40:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:48.342 13:40:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:48.342 13:40:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:48.342 13:40:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.342 13:40:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.342 13:40:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.342 ************************************ 00:08:48.342 START TEST exit_on_failed_rpc_init 00:08:48.342 ************************************ 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2117951 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2117951 
00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2117951 ']' 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.342 13:40:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:48.342 [2024-12-05 13:40:19.727705] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:08:48.342 [2024-12-05 13:40:19.727814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117951 ] 00:08:48.342 [2024-12-05 13:40:19.792549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.342 [2024-12-05 13:40:19.850591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:48.909 
13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:48.909 [2024-12-05 13:40:20.189812] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:08:48.909 [2024-12-05 13:40:20.189906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117961 ] 00:08:48.909 [2024-12-05 13:40:20.257597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.909 [2024-12-05 13:40:20.315966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.909 [2024-12-05 13:40:20.316080] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:48.909 [2024-12-05 13:40:20.316100] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:48.909 [2024-12-05 13:40:20.316111] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2117951 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2117951 ']' 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2117951 00:08:48.909 13:40:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117951 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117951' 00:08:48.909 killing process with pid 2117951 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2117951 00:08:48.909 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2117951 00:08:49.477 00:08:49.477 real 0m1.181s 00:08:49.477 user 0m1.290s 00:08:49.477 sys 0m0.443s 00:08:49.477 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.477 13:40:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 ************************************ 00:08:49.477 END TEST exit_on_failed_rpc_init 00:08:49.477 ************************************ 00:08:49.477 13:40:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:49.477 00:08:49.477 real 0m13.545s 00:08:49.477 user 0m12.817s 00:08:49.477 sys 0m1.628s 00:08:49.477 13:40:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.477 13:40:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 ************************************ 00:08:49.477 END TEST skip_rpc 00:08:49.477 ************************************ 00:08:49.477 13:40:20 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:49.477 13:40:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.477 13:40:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.477 13:40:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 ************************************ 00:08:49.477 START TEST rpc_client 00:08:49.477 ************************************ 00:08:49.477 13:40:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:49.477 * Looking for test storage... 00:08:49.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:49.477 13:40:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.477 13:40:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.477 13:40:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.736 13:40:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.736 --rc genhtml_branch_coverage=1 00:08:49.736 --rc genhtml_function_coverage=1 00:08:49.736 --rc genhtml_legend=1 00:08:49.736 --rc geninfo_all_blocks=1 00:08:49.736 --rc geninfo_unexecuted_blocks=1 00:08:49.736 00:08:49.736 ' 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.736 --rc genhtml_branch_coverage=1 
00:08:49.736 --rc genhtml_function_coverage=1 00:08:49.736 --rc genhtml_legend=1 00:08:49.736 --rc geninfo_all_blocks=1 00:08:49.736 --rc geninfo_unexecuted_blocks=1 00:08:49.736 00:08:49.736 ' 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.736 --rc genhtml_branch_coverage=1 00:08:49.736 --rc genhtml_function_coverage=1 00:08:49.736 --rc genhtml_legend=1 00:08:49.736 --rc geninfo_all_blocks=1 00:08:49.736 --rc geninfo_unexecuted_blocks=1 00:08:49.736 00:08:49.736 ' 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.736 --rc genhtml_branch_coverage=1 00:08:49.736 --rc genhtml_function_coverage=1 00:08:49.736 --rc genhtml_legend=1 00:08:49.736 --rc geninfo_all_blocks=1 00:08:49.736 --rc geninfo_unexecuted_blocks=1 00:08:49.736 00:08:49.736 ' 00:08:49.736 13:40:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:49.736 OK 00:08:49.736 13:40:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:49.736 00:08:49.736 real 0m0.152s 00:08:49.736 user 0m0.098s 00:08:49.736 sys 0m0.064s 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.736 13:40:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:49.736 ************************************ 00:08:49.736 END TEST rpc_client 00:08:49.736 ************************************ 00:08:49.736 13:40:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:49.736 13:40:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.736 13:40:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.736 13:40:21 -- common/autotest_common.sh@10 
-- # set +x 00:08:49.736 ************************************ 00:08:49.736 START TEST json_config 00:08:49.736 ************************************ 00:08:49.736 13:40:21 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:49.736 13:40:21 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.736 13:40:21 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.736 13:40:21 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.736 13:40:21 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.736 13:40:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.736 13:40:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.736 13:40:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.736 13:40:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.736 13:40:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.736 13:40:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.736 13:40:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.736 13:40:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:49.736 13:40:21 json_config -- scripts/common.sh@345 -- # : 1 00:08:49.736 13:40:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.736 13:40:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.736 13:40:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:49.736 13:40:21 json_config -- scripts/common.sh@353 -- # local d=1 00:08:49.736 13:40:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.736 13:40:21 json_config -- scripts/common.sh@355 -- # echo 1 00:08:49.736 13:40:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.736 13:40:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@353 -- # local d=2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.736 13:40:21 json_config -- scripts/common.sh@355 -- # echo 2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.736 13:40:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.736 13:40:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.737 13:40:21 json_config -- scripts/common.sh@368 -- # return 0 00:08:49.737 13:40:21 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.737 13:40:21 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.737 --rc genhtml_branch_coverage=1 00:08:49.737 --rc genhtml_function_coverage=1 00:08:49.737 --rc genhtml_legend=1 00:08:49.737 --rc geninfo_all_blocks=1 00:08:49.737 --rc geninfo_unexecuted_blocks=1 00:08:49.737 00:08:49.737 ' 00:08:49.737 13:40:21 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.737 --rc genhtml_branch_coverage=1 00:08:49.737 --rc genhtml_function_coverage=1 00:08:49.737 --rc genhtml_legend=1 00:08:49.737 --rc geninfo_all_blocks=1 00:08:49.737 --rc geninfo_unexecuted_blocks=1 00:08:49.737 00:08:49.737 ' 00:08:49.737 13:40:21 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.737 --rc genhtml_branch_coverage=1 00:08:49.737 --rc genhtml_function_coverage=1 00:08:49.737 --rc genhtml_legend=1 00:08:49.737 --rc geninfo_all_blocks=1 00:08:49.737 --rc geninfo_unexecuted_blocks=1 00:08:49.737 00:08:49.737 ' 00:08:49.737 13:40:21 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.737 --rc genhtml_branch_coverage=1 00:08:49.737 --rc genhtml_function_coverage=1 00:08:49.737 --rc genhtml_legend=1 00:08:49.737 --rc geninfo_all_blocks=1 00:08:49.737 --rc geninfo_unexecuted_blocks=1 00:08:49.737 00:08:49.737 ' 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.737 13:40:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.737 13:40:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.737 13:40:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.737 13:40:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.737 13:40:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.737 13:40:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.737 13:40:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.737 13:40:21 json_config -- paths/export.sh@5 -- # export PATH 00:08:49.737 13:40:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@51 -- # : 0 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.737 13:40:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:49.737 INFO: JSON configuration test init 00:08:49.737 13:40:21 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:49.737 13:40:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:49.737 13:40:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.737 13:40:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:49.996 13:40:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:49.996 13:40:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:49.996 13:40:21 json_config -- json_config/common.sh@9 -- # local app=target 00:08:49.996 13:40:21 json_config -- json_config/common.sh@10 -- # shift 00:08:49.996 13:40:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:49.996 13:40:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:49.996 13:40:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:49.996 13:40:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:49.996 13:40:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:49.996 13:40:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2118227 00:08:49.996 13:40:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:49.996 Waiting for target to run... 
00:08:49.996 13:40:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:49.996 13:40:21 json_config -- json_config/common.sh@25 -- # waitforlisten 2118227 /var/tmp/spdk_tgt.sock 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 2118227 ']' 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:49.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.996 13:40:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:49.996 [2024-12-05 13:40:21.319273] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:08:49.996 [2024-12-05 13:40:21.319361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118227 ] 00:08:50.564 [2024-12-05 13:40:21.853720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.564 [2024-12-05 13:40:21.904629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.821 13:40:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.821 13:40:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:50.821 13:40:22 json_config -- json_config/common.sh@26 -- # echo '' 00:08:50.821 00:08:50.821 13:40:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:50.821 13:40:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:50.821 13:40:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.821 13:40:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.821 13:40:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:50.821 13:40:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:50.821 13:40:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.821 13:40:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.821 13:40:22 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:51.079 13:40:22 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:51.079 13:40:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:54.358 13:40:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.358 13:40:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:54.358 13:40:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@54 -- # sort 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:54.358 13:40:25 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:54.358 13:40:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.358 13:40:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:54.358 13:40:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.358 13:40:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:54.358 13:40:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:54.358 13:40:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:54.616 MallocForNvmf0 00:08:54.616 13:40:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:08:54.616 13:40:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:54.873 MallocForNvmf1 00:08:54.873 13:40:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:54.873 13:40:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:55.131 [2024-12-05 13:40:26.631030] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.131 13:40:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.131 13:40:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.695 13:40:26 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:55.695 13:40:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:55.695 13:40:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:55.695 13:40:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:55.953 13:40:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:55.953 13:40:27 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:56.210 [2024-12-05 13:40:27.710551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:56.210 13:40:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:56.210 13:40:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.210 13:40:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:56.468 13:40:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:56.468 13:40:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.468 13:40:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:56.468 13:40:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:56.468 13:40:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:56.468 13:40:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:56.725 MallocBdevForConfigChangeCheck 00:08:56.725 13:40:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:56.725 13:40:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.725 13:40:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:56.726 13:40:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:56.726 13:40:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:56.983 13:40:28 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:08:56.983 INFO: shutting down applications... 00:08:56.983 13:40:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:56.983 13:40:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:56.984 13:40:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:56.984 13:40:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:58.881 Calling clear_iscsi_subsystem 00:08:58.881 Calling clear_nvmf_subsystem 00:08:58.881 Calling clear_nbd_subsystem 00:08:58.881 Calling clear_ublk_subsystem 00:08:58.881 Calling clear_vhost_blk_subsystem 00:08:58.881 Calling clear_vhost_scsi_subsystem 00:08:58.881 Calling clear_bdev_subsystem 00:08:58.881 13:40:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:58.881 13:40:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:58.881 13:40:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:58.881 13:40:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:58.881 13:40:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:58.881 13:40:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:59.139 13:40:30 json_config -- json_config/json_config.sh@352 -- # break 00:08:59.139 13:40:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:59.139 13:40:30 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:08:59.139 13:40:30 json_config -- json_config/common.sh@31 -- # local app=target 00:08:59.139 13:40:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:59.139 13:40:30 json_config -- json_config/common.sh@35 -- # [[ -n 2118227 ]] 00:08:59.139 13:40:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2118227 00:08:59.139 13:40:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:59.139 13:40:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:59.139 13:40:30 json_config -- json_config/common.sh@41 -- # kill -0 2118227 00:08:59.139 13:40:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:59.707 13:40:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:59.707 13:40:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:59.707 13:40:31 json_config -- json_config/common.sh@41 -- # kill -0 2118227 00:08:59.707 13:40:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:59.707 13:40:31 json_config -- json_config/common.sh@43 -- # break 00:08:59.707 13:40:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:59.707 13:40:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:59.707 SPDK target shutdown done 00:08:59.707 13:40:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:59.707 INFO: relaunching applications... 
00:08:59.707 13:40:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.707 13:40:31 json_config -- json_config/common.sh@9 -- # local app=target 00:08:59.707 13:40:31 json_config -- json_config/common.sh@10 -- # shift 00:08:59.707 13:40:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:59.707 13:40:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:59.707 13:40:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:59.707 13:40:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:59.707 13:40:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:59.707 13:40:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2119539 00:08:59.707 13:40:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.707 13:40:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:59.707 Waiting for target to run... 00:08:59.707 13:40:31 json_config -- json_config/common.sh@25 -- # waitforlisten 2119539 /var/tmp/spdk_tgt.sock 00:08:59.707 13:40:31 json_config -- common/autotest_common.sh@835 -- # '[' -z 2119539 ']' 00:08:59.707 13:40:31 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:59.708 13:40:31 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.708 13:40:31 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:59.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:59.708 13:40:31 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.708 13:40:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:59.708 [2024-12-05 13:40:31.086343] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:08:59.708 [2024-12-05 13:40:31.086449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119539 ] 00:08:59.963 [2024-12-05 13:40:31.446697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.222 [2024-12-05 13:40:31.489526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.604 [2024-12-05 13:40:34.531314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.604 [2024-12-05 13:40:34.563803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:03.604 13:40:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.604 13:40:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:03.604 13:40:34 json_config -- json_config/common.sh@26 -- # echo '' 00:09:03.604 00:09:03.604 13:40:34 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:03.604 13:40:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:03.604 INFO: Checking if target configuration is the same... 
00:09:03.604 13:40:34 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.604 13:40:34 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:03.604 13:40:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.604 + '[' 2 -ne 2 ']' 00:09:03.604 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:03.604 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:03.604 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.604 +++ basename /dev/fd/62 00:09:03.604 ++ mktemp /tmp/62.XXX 00:09:03.604 + tmp_file_1=/tmp/62.9RM 00:09:03.604 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.604 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.604 + tmp_file_2=/tmp/spdk_tgt_config.json.fSi 00:09:03.604 + ret=0 00:09:03.604 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.604 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.604 + diff -u /tmp/62.9RM /tmp/spdk_tgt_config.json.fSi 00:09:03.604 + echo 'INFO: JSON config files are the same' 00:09:03.604 INFO: JSON config files are the same 00:09:03.604 + rm /tmp/62.9RM /tmp/spdk_tgt_config.json.fSi 00:09:03.604 + exit 0 00:09:03.604 13:40:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:03.604 13:40:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:03.604 INFO: changing configuration and checking if this can be detected... 
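The "JSON config files are the same" check above saves the running config over RPC, key-sorts both files through config_filter.py, and diffs the results, so field ordering cannot cause a false mismatch. A condensed sketch of the same idea — python3's json module stands in here for config_filter.py, and the inline configs are invented examples:

```shell
# Sketch: order-insensitive comparison of two JSON configs, as
# test/json_config/json_diff.sh does after sorting both inputs.
norm() { python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))'; }
cfg_a='{"subsystems":[{"subsystem":"bdev"}],"x":1}'
cfg_b='{"x":1,"subsystems":[{"subsystem":"bdev"}]}'
t1=$(mktemp); t2=$(mktemp)
printf '%s' "$cfg_a" | norm > "$t1"
printf '%s' "$cfg_b" | norm > "$t2"
if diff -u "$t1" "$t2" > /dev/null; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi
rm -f "$t1" "$t2"
```

The later phase of the test relies on the same diff returning nonzero after `bdev_malloc_delete` removes a bdev, which is how a deliberate config change is detected.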
00:09:03.604 13:40:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.604 13:40:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.861 13:40:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.861 13:40:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:03.861 13:40:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.861 + '[' 2 -ne 2 ']' 00:09:03.861 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:03.861 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:09:03.861 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.861 +++ basename /dev/fd/62 00:09:03.861 ++ mktemp /tmp/62.XXX 00:09:03.861 + tmp_file_1=/tmp/62.msu 00:09:03.861 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.861 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.861 + tmp_file_2=/tmp/spdk_tgt_config.json.BYz 00:09:03.861 + ret=0 00:09:03.861 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:04.426 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:04.426 + diff -u /tmp/62.msu /tmp/spdk_tgt_config.json.BYz 00:09:04.426 + ret=1 00:09:04.426 + echo '=== Start of file: /tmp/62.msu ===' 00:09:04.426 + cat /tmp/62.msu 00:09:04.426 + echo '=== End of file: /tmp/62.msu ===' 00:09:04.426 + echo '' 00:09:04.426 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BYz ===' 00:09:04.426 + cat /tmp/spdk_tgt_config.json.BYz 00:09:04.426 + echo '=== End of file: /tmp/spdk_tgt_config.json.BYz ===' 00:09:04.426 + echo '' 00:09:04.426 + rm /tmp/62.msu /tmp/spdk_tgt_config.json.BYz 00:09:04.426 + exit 1 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:09:04.426 INFO: configuration change detected. 
00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@324 -- # [[ -n 2119539 ]] 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.426 13:40:35 json_config -- json_config/json_config.sh@330 -- # killprocess 2119539 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@954 -- # '[' -z 2119539 ']' 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@958 -- # kill -0 
2119539 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@959 -- # uname 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2119539 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.426 13:40:35 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2119539' 00:09:04.426 killing process with pid 2119539 00:09:04.427 13:40:35 json_config -- common/autotest_common.sh@973 -- # kill 2119539 00:09:04.427 13:40:35 json_config -- common/autotest_common.sh@978 -- # wait 2119539 00:09:06.320 13:40:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:06.320 13:40:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:06.320 13:40:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.320 13:40:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:06.320 13:40:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:06.320 13:40:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:06.320 INFO: Success 00:09:06.320 00:09:06.320 real 0m16.278s 00:09:06.320 user 0m17.926s 00:09:06.320 sys 0m2.647s 00:09:06.320 13:40:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.320 13:40:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:06.320 ************************************ 00:09:06.320 END TEST json_config 00:09:06.320 ************************************ 00:09:06.320 13:40:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
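The `killprocess` trace above first confirms the pid is alive with `kill -0`, reads its command name via `ps -o comm=`, refuses to signal a `sudo` wrapper, and only then kills and waits. A condensed sketch of that pattern, using a throwaway `sleep` child as a stand-in for the spdk_tgt reactor:

```shell
# Sketch of the killprocess pattern: verify liveness, check the process
# name, then kill and reap. 'sleep' stands in for spdk_tgt here.
sleep 30 & pid=$!
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")
  if [ "$name" != "sudo" ]; then    # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
  fi
fi
kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```

The `kill -0` probe sends no signal at all; it only asks the kernel whether the pid exists and is signalable, which is why it doubles as the liveness check both before and after the kill.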
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:06.320 13:40:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.320 13:40:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.320 13:40:37 -- common/autotest_common.sh@10 -- # set +x 00:09:06.320 ************************************ 00:09:06.320 START TEST json_config_extra_key 00:09:06.320 ************************************ 00:09:06.320 13:40:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.321 --rc genhtml_branch_coverage=1 00:09:06.321 --rc genhtml_function_coverage=1 00:09:06.321 --rc genhtml_legend=1 00:09:06.321 --rc geninfo_all_blocks=1 
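The `lt 1.15 2` trace above (deciding whether the installed lcov predates 2.x) splits each version string on `.`, `-`, or `:` and compares field by field, padding missing fields with zero. A condensed sketch of the less-than case — the function name `ver_lt` is mine; the suite's helpers are `lt` and `cmp_versions` in scripts/common.sh:

```shell
# Sketch of the cmp_versions '<' case: split both versions on '.', '-'
# or ':' and compare numerically, field by field.
ver_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i x y
  for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
    x=${v1[i]:-0}; y=${v2[i]:-0}    # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                          # equal is not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"    # the exact comparison made before lcov runs
```

Numeric field-wise comparison is what makes `1.15 < 2` come out true, where a plain string compare would get `"1.15" < "2"` right only by accident and fail on cases like `1.9` vs `1.15`.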
00:09:06.321 --rc geninfo_unexecuted_blocks=1 00:09:06.321 00:09:06.321 ' 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.321 --rc genhtml_branch_coverage=1 00:09:06.321 --rc genhtml_function_coverage=1 00:09:06.321 --rc genhtml_legend=1 00:09:06.321 --rc geninfo_all_blocks=1 00:09:06.321 --rc geninfo_unexecuted_blocks=1 00:09:06.321 00:09:06.321 ' 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.321 --rc genhtml_branch_coverage=1 00:09:06.321 --rc genhtml_function_coverage=1 00:09:06.321 --rc genhtml_legend=1 00:09:06.321 --rc geninfo_all_blocks=1 00:09:06.321 --rc geninfo_unexecuted_blocks=1 00:09:06.321 00:09:06.321 ' 00:09:06.321 13:40:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.321 --rc genhtml_branch_coverage=1 00:09:06.321 --rc genhtml_function_coverage=1 00:09:06.321 --rc genhtml_legend=1 00:09:06.321 --rc geninfo_all_blocks=1 00:09:06.321 --rc geninfo_unexecuted_blocks=1 00:09:06.321 00:09:06.321 ' 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.321 13:40:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.321 13:40:37 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.321 13:40:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.321 13:40:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.321 13:40:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:06.321 13:40:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:06.321 13:40:37 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.321 13:40:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:06.321 INFO: launching applications... 00:09:06.321 13:40:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:06.321 13:40:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:06.321 13:40:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:06.321 13:40:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:06.321 13:40:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:06.321 13:40:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:06.322 13:40:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:06.322 13:40:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:06.322 13:40:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2120463 00:09:06.322 13:40:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:06.322 13:40:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:06.322 Waiting for target to run... 
00:09:06.322 13:40:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2120463 /var/tmp/spdk_tgt.sock 00:09:06.322 13:40:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2120463 ']' 00:09:06.322 13:40:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:06.322 13:40:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.322 13:40:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:06.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:06.322 13:40:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.322 13:40:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 [2024-12-05 13:40:37.658209] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:06.322 [2024-12-05 13:40:37.658306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120463 ] 00:09:06.579 [2024-12-05 13:40:38.003962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.579 [2024-12-05 13:40:38.043074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.144 13:40:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.144 13:40:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:07.144 00:09:07.144 13:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:09:07.144 INFO: shutting down applications... 00:09:07.144 13:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2120463 ]] 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2120463 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2120463 00:09:07.144 13:40:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:07.708 13:40:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:07.708 13:40:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:07.708 13:40:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2120463 00:09:07.708 13:40:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:07.708 13:40:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:07.708 13:40:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:07.708 13:40:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:07.708 SPDK target shutdown done 00:09:07.708 13:40:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:07.708 Success 00:09:07.708 00:09:07.708 real 0m1.710s 00:09:07.708 user 0m1.729s 00:09:07.708 sys 0m0.450s 00:09:07.708 13:40:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.708 13:40:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
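The shutdown trace above sends SIGINT to the target, then polls with `kill -0` up to 30 times, sleeping between attempts, and only reports "SPDK target shutdown done" once the pid disappears. A sketch of that loop — a background `sleep` stands in for spdk_tgt, and SIGTERM replaces SIGINT because non-interactive shells start background jobs with SIGINT ignored:

```shell
# Sketch of the shutdown poll in json_config/common.sh: signal the app,
# then poll until it exits or the retry budget runs out.
sleep 30 & app=$!
kill -TERM "$app"                    # the real script sends SIGINT
for (( i = 0; i < 30; i++ )); do
  kill -0 "$app" 2>/dev/null || break
  sleep 0.5
done
wait "$app" 2>/dev/null
if kill -0 "$app" 2>/dev/null; then
  echo 'App failed to shut down'
else
  echo 'SPDK target shutdown done'
fi
```

Polling with a bounded retry count, rather than a bare `wait`, is what lets the test distinguish a clean shutdown from a hung target and fail with a diagnostic instead of blocking forever.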
+x 00:09:07.708 ************************************ 00:09:07.708 END TEST json_config_extra_key 00:09:07.708 ************************************ 00:09:07.708 13:40:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:07.708 13:40:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.708 13:40:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.708 13:40:39 -- common/autotest_common.sh@10 -- # set +x 00:09:07.708 ************************************ 00:09:07.708 START TEST alias_rpc 00:09:07.708 ************************************ 00:09:07.708 13:40:39 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:07.965 * Looking for test storage... 00:09:07.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.965 13:40:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.965 --rc genhtml_branch_coverage=1 00:09:07.965 --rc genhtml_function_coverage=1 00:09:07.965 --rc genhtml_legend=1 00:09:07.965 --rc geninfo_all_blocks=1 00:09:07.965 --rc geninfo_unexecuted_blocks=1 00:09:07.965 
00:09:07.965 ' 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.965 --rc genhtml_branch_coverage=1 00:09:07.965 --rc genhtml_function_coverage=1 00:09:07.965 --rc genhtml_legend=1 00:09:07.965 --rc geninfo_all_blocks=1 00:09:07.965 --rc geninfo_unexecuted_blocks=1 00:09:07.965 00:09:07.965 ' 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.965 --rc genhtml_branch_coverage=1 00:09:07.965 --rc genhtml_function_coverage=1 00:09:07.965 --rc genhtml_legend=1 00:09:07.965 --rc geninfo_all_blocks=1 00:09:07.965 --rc geninfo_unexecuted_blocks=1 00:09:07.965 00:09:07.965 ' 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.965 --rc genhtml_branch_coverage=1 00:09:07.965 --rc genhtml_function_coverage=1 00:09:07.965 --rc genhtml_legend=1 00:09:07.965 --rc geninfo_all_blocks=1 00:09:07.965 --rc geninfo_unexecuted_blocks=1 00:09:07.965 00:09:07.965 ' 00:09:07.965 13:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:07.965 13:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2120777 00:09:07.965 13:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.965 13:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2120777 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2120777 ']' 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.965 13:40:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.965 [2024-12-05 13:40:39.412033] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:07.965 [2024-12-05 13:40:39.412125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120777 ] 00:09:07.965 [2024-12-05 13:40:39.478212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.223 [2024-12-05 13:40:39.533655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.481 13:40:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.481 13:40:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:08.481 13:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:08.738 13:40:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2120777 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2120777 ']' 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2120777 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2120777 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.738 
13:40:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2120777' 00:09:08.738 killing process with pid 2120777 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 2120777 00:09:08.738 13:40:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 2120777 00:09:09.304 00:09:09.304 real 0m1.325s 00:09:09.304 user 0m1.457s 00:09:09.304 sys 0m0.426s 00:09:09.304 13:40:40 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.304 13:40:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 ************************************ 00:09:09.304 END TEST alias_rpc 00:09:09.304 ************************************ 00:09:09.304 13:40:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:09.304 13:40:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:09.304 13:40:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.304 13:40:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.304 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 ************************************ 00:09:09.304 START TEST spdkcli_tcp 00:09:09.304 ************************************ 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:09.305 * Looking for test storage... 
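The alias_rpc run above follows the usual SPDK autotest pattern: launch `spdk_tgt`, block in `waitforlisten` until the target is serving `/var/tmp/spdk.sock`, drive it with `rpc.py`, then `killprocess` it. A minimal sketch of that wait loop (the function name and polling interval are assumptions, not the actual `autotest_common.sh` implementation):

```shell
# Hypothetical sketch of the "waitforlisten" pattern seen in the log:
# poll until the SPDK UNIX domain socket appears, or give up after
# max_retries attempts (the real helper defaults max_retries to 100).
wait_for_sock() {
    local sock=${1:-/var/tmp/spdk.sock}
    local max_retries=${2:-100}
    local i=0
    while (( i < max_retries )); do
        # -S is true when the path exists and is a socket
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

In the real helper the retry budget is what produces the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message repeated throughout this log.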
00:09:09.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.305 13:40:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.305 --rc genhtml_branch_coverage=1 00:09:09.305 --rc genhtml_function_coverage=1 00:09:09.305 --rc genhtml_legend=1 00:09:09.305 --rc geninfo_all_blocks=1 00:09:09.305 --rc geninfo_unexecuted_blocks=1 00:09:09.305 00:09:09.305 ' 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.305 --rc genhtml_branch_coverage=1 00:09:09.305 --rc genhtml_function_coverage=1 00:09:09.305 --rc genhtml_legend=1 00:09:09.305 --rc geninfo_all_blocks=1 00:09:09.305 --rc geninfo_unexecuted_blocks=1 00:09:09.305 00:09:09.305 ' 00:09:09.305 13:40:40 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.305 --rc genhtml_branch_coverage=1 00:09:09.305 --rc genhtml_function_coverage=1 00:09:09.305 --rc genhtml_legend=1 00:09:09.305 --rc geninfo_all_blocks=1 00:09:09.305 --rc geninfo_unexecuted_blocks=1 00:09:09.305 00:09:09.305 ' 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.305 --rc genhtml_branch_coverage=1 00:09:09.305 --rc genhtml_function_coverage=1 00:09:09.305 --rc genhtml_legend=1 00:09:09.305 --rc geninfo_all_blocks=1 00:09:09.305 --rc geninfo_unexecuted_blocks=1 00:09:09.305 00:09:09.305 ' 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2120979 00:09:09.305 13:40:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:09.305 13:40:40 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2120979 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2120979 ']' 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.305 13:40:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.305 [2024-12-05 13:40:40.780849] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:09.305 [2024-12-05 13:40:40.780956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120979 ] 00:09:09.596 [2024-12-05 13:40:40.847035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.596 [2024-12-05 13:40:40.902406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.596 [2024-12-05 13:40:40.902410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.854 13:40:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.854 13:40:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:09.854 13:40:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2120987 00:09:09.854 13:40:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:09.854 13:40:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:09:10.113 [ 00:09:10.113 "bdev_malloc_delete", 00:09:10.113 "bdev_malloc_create", 00:09:10.113 "bdev_null_resize", 00:09:10.113 "bdev_null_delete", 00:09:10.113 "bdev_null_create", 00:09:10.113 "bdev_nvme_cuse_unregister", 00:09:10.113 "bdev_nvme_cuse_register", 00:09:10.113 "bdev_opal_new_user", 00:09:10.113 "bdev_opal_set_lock_state", 00:09:10.113 "bdev_opal_delete", 00:09:10.113 "bdev_opal_get_info", 00:09:10.113 "bdev_opal_create", 00:09:10.113 "bdev_nvme_opal_revert", 00:09:10.113 "bdev_nvme_opal_init", 00:09:10.113 "bdev_nvme_send_cmd", 00:09:10.113 "bdev_nvme_set_keys", 00:09:10.113 "bdev_nvme_get_path_iostat", 00:09:10.113 "bdev_nvme_get_mdns_discovery_info", 00:09:10.113 "bdev_nvme_stop_mdns_discovery", 00:09:10.113 "bdev_nvme_start_mdns_discovery", 00:09:10.113 "bdev_nvme_set_multipath_policy", 00:09:10.113 "bdev_nvme_set_preferred_path", 00:09:10.113 "bdev_nvme_get_io_paths", 00:09:10.113 "bdev_nvme_remove_error_injection", 00:09:10.113 "bdev_nvme_add_error_injection", 00:09:10.113 "bdev_nvme_get_discovery_info", 00:09:10.113 "bdev_nvme_stop_discovery", 00:09:10.113 "bdev_nvme_start_discovery", 00:09:10.113 "bdev_nvme_get_controller_health_info", 00:09:10.113 "bdev_nvme_disable_controller", 00:09:10.113 "bdev_nvme_enable_controller", 00:09:10.113 "bdev_nvme_reset_controller", 00:09:10.113 "bdev_nvme_get_transport_statistics", 00:09:10.113 "bdev_nvme_apply_firmware", 00:09:10.113 "bdev_nvme_detach_controller", 00:09:10.113 "bdev_nvme_get_controllers", 00:09:10.113 "bdev_nvme_attach_controller", 00:09:10.113 "bdev_nvme_set_hotplug", 00:09:10.113 "bdev_nvme_set_options", 00:09:10.113 "bdev_passthru_delete", 00:09:10.113 "bdev_passthru_create", 00:09:10.113 "bdev_lvol_set_parent_bdev", 00:09:10.113 "bdev_lvol_set_parent", 00:09:10.113 "bdev_lvol_check_shallow_copy", 00:09:10.113 "bdev_lvol_start_shallow_copy", 00:09:10.113 "bdev_lvol_grow_lvstore", 00:09:10.113 "bdev_lvol_get_lvols", 00:09:10.113 "bdev_lvol_get_lvstores", 
00:09:10.113 "bdev_lvol_delete", 00:09:10.113 "bdev_lvol_set_read_only", 00:09:10.113 "bdev_lvol_resize", 00:09:10.113 "bdev_lvol_decouple_parent", 00:09:10.113 "bdev_lvol_inflate", 00:09:10.113 "bdev_lvol_rename", 00:09:10.113 "bdev_lvol_clone_bdev", 00:09:10.113 "bdev_lvol_clone", 00:09:10.113 "bdev_lvol_snapshot", 00:09:10.113 "bdev_lvol_create", 00:09:10.113 "bdev_lvol_delete_lvstore", 00:09:10.113 "bdev_lvol_rename_lvstore", 00:09:10.113 "bdev_lvol_create_lvstore", 00:09:10.113 "bdev_raid_set_options", 00:09:10.113 "bdev_raid_remove_base_bdev", 00:09:10.113 "bdev_raid_add_base_bdev", 00:09:10.113 "bdev_raid_delete", 00:09:10.113 "bdev_raid_create", 00:09:10.113 "bdev_raid_get_bdevs", 00:09:10.113 "bdev_error_inject_error", 00:09:10.113 "bdev_error_delete", 00:09:10.113 "bdev_error_create", 00:09:10.113 "bdev_split_delete", 00:09:10.113 "bdev_split_create", 00:09:10.113 "bdev_delay_delete", 00:09:10.113 "bdev_delay_create", 00:09:10.113 "bdev_delay_update_latency", 00:09:10.113 "bdev_zone_block_delete", 00:09:10.113 "bdev_zone_block_create", 00:09:10.113 "blobfs_create", 00:09:10.113 "blobfs_detect", 00:09:10.113 "blobfs_set_cache_size", 00:09:10.113 "bdev_aio_delete", 00:09:10.113 "bdev_aio_rescan", 00:09:10.113 "bdev_aio_create", 00:09:10.113 "bdev_ftl_set_property", 00:09:10.113 "bdev_ftl_get_properties", 00:09:10.113 "bdev_ftl_get_stats", 00:09:10.113 "bdev_ftl_unmap", 00:09:10.113 "bdev_ftl_unload", 00:09:10.113 "bdev_ftl_delete", 00:09:10.113 "bdev_ftl_load", 00:09:10.113 "bdev_ftl_create", 00:09:10.113 "bdev_virtio_attach_controller", 00:09:10.113 "bdev_virtio_scsi_get_devices", 00:09:10.113 "bdev_virtio_detach_controller", 00:09:10.113 "bdev_virtio_blk_set_hotplug", 00:09:10.113 "bdev_iscsi_delete", 00:09:10.113 "bdev_iscsi_create", 00:09:10.113 "bdev_iscsi_set_options", 00:09:10.113 "accel_error_inject_error", 00:09:10.113 "ioat_scan_accel_module", 00:09:10.113 "dsa_scan_accel_module", 00:09:10.113 "iaa_scan_accel_module", 00:09:10.113 
"vfu_virtio_create_fs_endpoint", 00:09:10.113 "vfu_virtio_create_scsi_endpoint", 00:09:10.113 "vfu_virtio_scsi_remove_target", 00:09:10.113 "vfu_virtio_scsi_add_target", 00:09:10.113 "vfu_virtio_create_blk_endpoint", 00:09:10.113 "vfu_virtio_delete_endpoint", 00:09:10.113 "keyring_file_remove_key", 00:09:10.113 "keyring_file_add_key", 00:09:10.113 "keyring_linux_set_options", 00:09:10.113 "fsdev_aio_delete", 00:09:10.113 "fsdev_aio_create", 00:09:10.113 "iscsi_get_histogram", 00:09:10.113 "iscsi_enable_histogram", 00:09:10.113 "iscsi_set_options", 00:09:10.113 "iscsi_get_auth_groups", 00:09:10.113 "iscsi_auth_group_remove_secret", 00:09:10.113 "iscsi_auth_group_add_secret", 00:09:10.113 "iscsi_delete_auth_group", 00:09:10.113 "iscsi_create_auth_group", 00:09:10.113 "iscsi_set_discovery_auth", 00:09:10.113 "iscsi_get_options", 00:09:10.113 "iscsi_target_node_request_logout", 00:09:10.113 "iscsi_target_node_set_redirect", 00:09:10.113 "iscsi_target_node_set_auth", 00:09:10.113 "iscsi_target_node_add_lun", 00:09:10.113 "iscsi_get_stats", 00:09:10.113 "iscsi_get_connections", 00:09:10.113 "iscsi_portal_group_set_auth", 00:09:10.113 "iscsi_start_portal_group", 00:09:10.113 "iscsi_delete_portal_group", 00:09:10.113 "iscsi_create_portal_group", 00:09:10.113 "iscsi_get_portal_groups", 00:09:10.113 "iscsi_delete_target_node", 00:09:10.113 "iscsi_target_node_remove_pg_ig_maps", 00:09:10.113 "iscsi_target_node_add_pg_ig_maps", 00:09:10.113 "iscsi_create_target_node", 00:09:10.113 "iscsi_get_target_nodes", 00:09:10.113 "iscsi_delete_initiator_group", 00:09:10.113 "iscsi_initiator_group_remove_initiators", 00:09:10.113 "iscsi_initiator_group_add_initiators", 00:09:10.113 "iscsi_create_initiator_group", 00:09:10.113 "iscsi_get_initiator_groups", 00:09:10.113 "nvmf_set_crdt", 00:09:10.113 "nvmf_set_config", 00:09:10.113 "nvmf_set_max_subsystems", 00:09:10.113 "nvmf_stop_mdns_prr", 00:09:10.113 "nvmf_publish_mdns_prr", 00:09:10.113 "nvmf_subsystem_get_listeners", 00:09:10.113 
"nvmf_subsystem_get_qpairs", 00:09:10.113 "nvmf_subsystem_get_controllers", 00:09:10.113 "nvmf_get_stats", 00:09:10.113 "nvmf_get_transports", 00:09:10.113 "nvmf_create_transport", 00:09:10.113 "nvmf_get_targets", 00:09:10.113 "nvmf_delete_target", 00:09:10.113 "nvmf_create_target", 00:09:10.113 "nvmf_subsystem_allow_any_host", 00:09:10.113 "nvmf_subsystem_set_keys", 00:09:10.113 "nvmf_subsystem_remove_host", 00:09:10.113 "nvmf_subsystem_add_host", 00:09:10.113 "nvmf_ns_remove_host", 00:09:10.113 "nvmf_ns_add_host", 00:09:10.113 "nvmf_subsystem_remove_ns", 00:09:10.113 "nvmf_subsystem_set_ns_ana_group", 00:09:10.113 "nvmf_subsystem_add_ns", 00:09:10.113 "nvmf_subsystem_listener_set_ana_state", 00:09:10.113 "nvmf_discovery_get_referrals", 00:09:10.113 "nvmf_discovery_remove_referral", 00:09:10.113 "nvmf_discovery_add_referral", 00:09:10.113 "nvmf_subsystem_remove_listener", 00:09:10.113 "nvmf_subsystem_add_listener", 00:09:10.113 "nvmf_delete_subsystem", 00:09:10.113 "nvmf_create_subsystem", 00:09:10.113 "nvmf_get_subsystems", 00:09:10.113 "env_dpdk_get_mem_stats", 00:09:10.113 "nbd_get_disks", 00:09:10.113 "nbd_stop_disk", 00:09:10.113 "nbd_start_disk", 00:09:10.113 "ublk_recover_disk", 00:09:10.113 "ublk_get_disks", 00:09:10.113 "ublk_stop_disk", 00:09:10.113 "ublk_start_disk", 00:09:10.113 "ublk_destroy_target", 00:09:10.113 "ublk_create_target", 00:09:10.113 "virtio_blk_create_transport", 00:09:10.113 "virtio_blk_get_transports", 00:09:10.113 "vhost_controller_set_coalescing", 00:09:10.113 "vhost_get_controllers", 00:09:10.113 "vhost_delete_controller", 00:09:10.113 "vhost_create_blk_controller", 00:09:10.113 "vhost_scsi_controller_remove_target", 00:09:10.113 "vhost_scsi_controller_add_target", 00:09:10.113 "vhost_start_scsi_controller", 00:09:10.113 "vhost_create_scsi_controller", 00:09:10.113 "thread_set_cpumask", 00:09:10.113 "scheduler_set_options", 00:09:10.113 "framework_get_governor", 00:09:10.113 "framework_get_scheduler", 00:09:10.113 
"framework_set_scheduler", 00:09:10.113 "framework_get_reactors", 00:09:10.113 "thread_get_io_channels", 00:09:10.114 "thread_get_pollers", 00:09:10.114 "thread_get_stats", 00:09:10.114 "framework_monitor_context_switch", 00:09:10.114 "spdk_kill_instance", 00:09:10.114 "log_enable_timestamps", 00:09:10.114 "log_get_flags", 00:09:10.114 "log_clear_flag", 00:09:10.114 "log_set_flag", 00:09:10.114 "log_get_level", 00:09:10.114 "log_set_level", 00:09:10.114 "log_get_print_level", 00:09:10.114 "log_set_print_level", 00:09:10.114 "framework_enable_cpumask_locks", 00:09:10.114 "framework_disable_cpumask_locks", 00:09:10.114 "framework_wait_init", 00:09:10.114 "framework_start_init", 00:09:10.114 "scsi_get_devices", 00:09:10.114 "bdev_get_histogram", 00:09:10.114 "bdev_enable_histogram", 00:09:10.114 "bdev_set_qos_limit", 00:09:10.114 "bdev_set_qd_sampling_period", 00:09:10.114 "bdev_get_bdevs", 00:09:10.114 "bdev_reset_iostat", 00:09:10.114 "bdev_get_iostat", 00:09:10.114 "bdev_examine", 00:09:10.114 "bdev_wait_for_examine", 00:09:10.114 "bdev_set_options", 00:09:10.114 "accel_get_stats", 00:09:10.114 "accel_set_options", 00:09:10.114 "accel_set_driver", 00:09:10.114 "accel_crypto_key_destroy", 00:09:10.114 "accel_crypto_keys_get", 00:09:10.114 "accel_crypto_key_create", 00:09:10.114 "accel_assign_opc", 00:09:10.114 "accel_get_module_info", 00:09:10.114 "accel_get_opc_assignments", 00:09:10.114 "vmd_rescan", 00:09:10.114 "vmd_remove_device", 00:09:10.114 "vmd_enable", 00:09:10.114 "sock_get_default_impl", 00:09:10.114 "sock_set_default_impl", 00:09:10.114 "sock_impl_set_options", 00:09:10.114 "sock_impl_get_options", 00:09:10.114 "iobuf_get_stats", 00:09:10.114 "iobuf_set_options", 00:09:10.114 "keyring_get_keys", 00:09:10.114 "vfu_tgt_set_base_path", 00:09:10.114 "framework_get_pci_devices", 00:09:10.114 "framework_get_config", 00:09:10.114 "framework_get_subsystems", 00:09:10.114 "fsdev_set_opts", 00:09:10.114 "fsdev_get_opts", 00:09:10.114 "trace_get_info", 
00:09:10.114 "trace_get_tpoint_group_mask", 00:09:10.114 "trace_disable_tpoint_group", 00:09:10.114 "trace_enable_tpoint_group", 00:09:10.114 "trace_clear_tpoint_mask", 00:09:10.114 "trace_set_tpoint_mask", 00:09:10.114 "notify_get_notifications", 00:09:10.114 "notify_get_types", 00:09:10.114 "spdk_get_version", 00:09:10.114 "rpc_get_methods" 00:09:10.114 ] 00:09:10.114 13:40:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.114 13:40:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:10.114 13:40:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2120979 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2120979 ']' 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2120979 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2120979 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2120979' 00:09:10.114 killing process with pid 2120979 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2120979 00:09:10.114 13:40:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2120979 00:09:10.679 00:09:10.679 real 0m1.338s 00:09:10.679 user 0m2.403s 00:09:10.679 sys 0m0.460s 00:09:10.679 13:40:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.679 13:40:41 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.679 ************************************ 00:09:10.679 END TEST spdkcli_tcp 00:09:10.679 ************************************ 00:09:10.679 13:40:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:10.679 13:40:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.679 13:40:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.679 13:40:41 -- common/autotest_common.sh@10 -- # set +x 00:09:10.679 ************************************ 00:09:10.679 START TEST dpdk_mem_utility 00:09:10.679 ************************************ 00:09:10.679 13:40:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:10.679 * Looking for test storage... 00:09:10.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:10.679 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:10.679 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:09:10.679 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.679 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.679 13:40:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:10.680 13:40:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.680 13:40:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.680 13:40:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.680 13:40:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:09:10.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.680 --rc genhtml_branch_coverage=1 00:09:10.680 --rc genhtml_function_coverage=1 00:09:10.680 --rc genhtml_legend=1 00:09:10.680 --rc geninfo_all_blocks=1 00:09:10.680 --rc geninfo_unexecuted_blocks=1 00:09:10.680 00:09:10.680 ' 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.680 --rc genhtml_branch_coverage=1 00:09:10.680 --rc genhtml_function_coverage=1 00:09:10.680 --rc genhtml_legend=1 00:09:10.680 --rc geninfo_all_blocks=1 00:09:10.680 --rc geninfo_unexecuted_blocks=1 00:09:10.680 00:09:10.680 ' 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:10.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.680 --rc genhtml_branch_coverage=1 00:09:10.680 --rc genhtml_function_coverage=1 00:09:10.680 --rc genhtml_legend=1 00:09:10.680 --rc geninfo_all_blocks=1 00:09:10.680 --rc geninfo_unexecuted_blocks=1 00:09:10.680 00:09:10.680 ' 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.680 --rc genhtml_branch_coverage=1 00:09:10.680 --rc genhtml_function_coverage=1 00:09:10.680 --rc genhtml_legend=1 00:09:10.680 --rc geninfo_all_blocks=1 00:09:10.680 --rc geninfo_unexecuted_blocks=1 00:09:10.680 00:09:10.680 ' 00:09:10.680 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:10.680 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2121187 00:09:10.680 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:10.680 13:40:42 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2121187 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2121187 ']' 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.680 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:10.680 [2024-12-05 13:40:42.181008] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:10.680 [2024-12-05 13:40:42.181113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121187 ] 00:09:10.938 [2024-12-05 13:40:42.246121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.938 [2024-12-05 13:40:42.301729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.197 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.197 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:11.197 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:11.197 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:11.197 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.197 
13:40:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:11.197 { 00:09:11.197 "filename": "/tmp/spdk_mem_dump.txt" 00:09:11.197 } 00:09:11.197 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.197 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:11.197 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:11.197 1 heaps totaling size 818.000000 MiB 00:09:11.197 size: 818.000000 MiB heap id: 0 00:09:11.197 end heaps---------- 00:09:11.197 9 mempools totaling size 603.782043 MiB 00:09:11.197 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:11.197 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:11.197 size: 100.555481 MiB name: bdev_io_2121187 00:09:11.197 size: 50.003479 MiB name: msgpool_2121187 00:09:11.197 size: 36.509338 MiB name: fsdev_io_2121187 00:09:11.197 size: 21.763794 MiB name: PDU_Pool 00:09:11.197 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:11.197 size: 4.133484 MiB name: evtpool_2121187 00:09:11.197 size: 0.026123 MiB name: Session_Pool 00:09:11.197 end mempools------- 00:09:11.197 6 memzones totaling size 4.142822 MiB 00:09:11.197 size: 1.000366 MiB name: RG_ring_0_2121187 00:09:11.197 size: 1.000366 MiB name: RG_ring_1_2121187 00:09:11.197 size: 1.000366 MiB name: RG_ring_4_2121187 00:09:11.197 size: 1.000366 MiB name: RG_ring_5_2121187 00:09:11.197 size: 0.125366 MiB name: RG_ring_2_2121187 00:09:11.197 size: 0.015991 MiB name: RG_ring_3_2121187 00:09:11.197 end memzones------- 00:09:11.197 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:11.197 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:09:11.197 list of free elements. 
size: 10.852478 MiB 00:09:11.197 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:11.197 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:11.197 element at address: 0x200000400000 with size: 0.998535 MiB 00:09:11.197 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:11.197 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:11.197 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:11.197 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:11.197 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:11.197 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:09:11.197 element at address: 0x200000c00000 with size: 0.495422 MiB 00:09:11.197 element at address: 0x20000a600000 with size: 0.490723 MiB 00:09:11.197 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:11.197 element at address: 0x200003e00000 with size: 0.481934 MiB 00:09:11.197 element at address: 0x200028200000 with size: 0.410034 MiB 00:09:11.197 element at address: 0x200000800000 with size: 0.355042 MiB 00:09:11.197 list of standard malloc elements. 
size: 199.218628 MiB 00:09:11.197 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:11.197 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:11.197 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:11.197 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:11.197 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:11.197 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:11.197 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:11.197 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:11.197 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:11.197 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000085b040 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000085f300 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:11.197 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:11.197 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200028268f80 with size: 0.000183 MiB 00:09:11.197 element at address: 0x200028269040 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:11.197 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:11.197 list of memzone associated elements. 
size: 607.928894 MiB 00:09:11.197 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:11.197 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:11.197 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:11.197 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:11.198 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:11.198 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2121187_0 00:09:11.198 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:11.198 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2121187_0 00:09:11.198 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:11.198 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2121187_0 00:09:11.198 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:11.198 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:11.198 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:11.198 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:11.198 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:11.198 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2121187_0 00:09:11.198 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:11.198 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2121187 00:09:11.198 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:11.198 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2121187 00:09:11.198 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:11.198 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:11.198 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:11.198 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:11.198 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:11.198 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:11.198 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:11.198 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:11.198 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:11.198 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2121187 00:09:11.198 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:11.198 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2121187 00:09:11.198 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:11.198 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2121187 00:09:11.198 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:09:11.198 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2121187 00:09:11.198 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:11.198 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2121187 00:09:11.198 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:11.198 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2121187 00:09:11.198 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:11.198 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:11.198 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:11.198 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:11.198 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:11.198 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:11.198 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:11.198 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2121187 00:09:11.198 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:09:11.198 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2121187 00:09:11.198 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:09:11.198 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:11.198 element at address: 0x200028269100 with size: 0.023743 MiB 00:09:11.198 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:11.198 element at address: 0x20000085b100 with size: 0.016113 MiB 00:09:11.198 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2121187 00:09:11.198 element at address: 0x20002826f240 with size: 0.002441 MiB 00:09:11.198 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:11.198 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:09:11.198 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2121187 00:09:11.198 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:11.198 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2121187 00:09:11.198 element at address: 0x20000085af00 with size: 0.000305 MiB 00:09:11.198 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2121187 00:09:11.198 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:09:11.198 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:11.198 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:11.198 13:40:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2121187 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2121187 ']' 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2121187 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121187 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.198 13:40:42 
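The mempool accounting in the dpdk_mem_info.py dump above can be cross-checked by hand: the nine per-pool sizes should add up to the reported total of 603.782043 MiB. A quick sketch of that check (sizes copied verbatim from this log; since the per-pool figures are themselves rounded to six decimals, the sum may drift in the last decimal place):

```shell
# Sum the nine mempool sizes reported by dpdk_mem_info.py in the dump above
# and compare against the printed total of 603.782043 MiB. The values are
# copied from this log; last-digit drift vs. the reported total is expected
# because each per-pool size was already rounded.
sizes="212.674988 158.602051 100.555481 50.003479 36.509338 21.763794 19.513306 4.133484 0.026123"
total=$(printf '%s\n' $sizes | awk '{ s += $1 } END { printf "%.6f", s }')
echo "computed mempool total: $total MiB"   # prints: computed mempool total: 603.782044 MiB
```

The computed 603.782044 MiB agrees with the reported 603.782043 MiB to within rounding, which is the consistency the test is implicitly relying on.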
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121187' 00:09:11.198 killing process with pid 2121187 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2121187 00:09:11.198 13:40:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2121187 00:09:11.764 00:09:11.764 real 0m1.143s 00:09:11.764 user 0m1.132s 00:09:11.764 sys 0m0.411s 00:09:11.764 13:40:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.764 13:40:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:11.764 ************************************ 00:09:11.764 END TEST dpdk_mem_utility 00:09:11.764 ************************************ 00:09:11.764 13:40:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:11.764 13:40:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.764 13:40:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.764 13:40:43 -- common/autotest_common.sh@10 -- # set +x 00:09:11.764 ************************************ 00:09:11.764 START TEST event 00:09:11.764 ************************************ 00:09:11.764 13:40:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:11.764 * Looking for test storage... 
00:09:11.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:11.764 13:40:43 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:11.764 13:40:43 event -- common/autotest_common.sh@1711 -- # lcov --version 00:09:11.764 13:40:43 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:12.023 13:40:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.023 13:40:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.023 13:40:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.023 13:40:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.023 13:40:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.023 13:40:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.023 13:40:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.023 13:40:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.023 13:40:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.023 13:40:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.023 13:40:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.023 13:40:43 event -- scripts/common.sh@344 -- # case "$op" in 00:09:12.023 13:40:43 event -- scripts/common.sh@345 -- # : 1 00:09:12.023 13:40:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.023 13:40:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.023 13:40:43 event -- scripts/common.sh@365 -- # decimal 1 00:09:12.023 13:40:43 event -- scripts/common.sh@353 -- # local d=1 00:09:12.023 13:40:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.023 13:40:43 event -- scripts/common.sh@355 -- # echo 1 00:09:12.023 13:40:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.023 13:40:43 event -- scripts/common.sh@366 -- # decimal 2 00:09:12.023 13:40:43 event -- scripts/common.sh@353 -- # local d=2 00:09:12.023 13:40:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.023 13:40:43 event -- scripts/common.sh@355 -- # echo 2 00:09:12.023 13:40:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.023 13:40:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.023 13:40:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.023 13:40:43 event -- scripts/common.sh@368 -- # return 0 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:12.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.023 --rc genhtml_branch_coverage=1 00:09:12.023 --rc genhtml_function_coverage=1 00:09:12.023 --rc genhtml_legend=1 00:09:12.023 --rc geninfo_all_blocks=1 00:09:12.023 --rc geninfo_unexecuted_blocks=1 00:09:12.023 00:09:12.023 ' 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:12.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.023 --rc genhtml_branch_coverage=1 00:09:12.023 --rc genhtml_function_coverage=1 00:09:12.023 --rc genhtml_legend=1 00:09:12.023 --rc geninfo_all_blocks=1 00:09:12.023 --rc geninfo_unexecuted_blocks=1 00:09:12.023 00:09:12.023 ' 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:12.023 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:12.023 --rc genhtml_branch_coverage=1 00:09:12.023 --rc genhtml_function_coverage=1 00:09:12.023 --rc genhtml_legend=1 00:09:12.023 --rc geninfo_all_blocks=1 00:09:12.023 --rc geninfo_unexecuted_blocks=1 00:09:12.023 00:09:12.023 ' 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:12.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.023 --rc genhtml_branch_coverage=1 00:09:12.023 --rc genhtml_function_coverage=1 00:09:12.023 --rc genhtml_legend=1 00:09:12.023 --rc geninfo_all_blocks=1 00:09:12.023 --rc geninfo_unexecuted_blocks=1 00:09:12.023 00:09:12.023 ' 00:09:12.023 13:40:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:12.023 13:40:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:12.023 13:40:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:12.023 13:40:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.023 13:40:43 event -- common/autotest_common.sh@10 -- # set +x 00:09:12.023 ************************************ 00:09:12.023 START TEST event_perf 00:09:12.023 ************************************ 00:09:12.024 13:40:43 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:12.024 Running I/O for 1 seconds...[2024-12-05 13:40:43.363301] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
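The xtrace above steps through cmp_versions from scripts/common.sh to decide that the installed lcov (1.15) is older than 2, which selects the branch/function coverage options. The same field-by-field comparison can be sketched as a standalone function (ver_lt is our name for this sketch, not an SPDK helper; it assumes purely numeric fields):

```shell
# Re-sketch of the version comparison traced above: split both version
# strings on '.', '-' and ':' (the IFS=.-: seen in the trace), then compare
# field by field numerically, treating missing fields as 0.
ver_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater -> not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less -> less-than
    done
    return 1   # all fields equal -> not less-than
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints: lcov 1.15 predates 2
```

This is why the trace above ends with `return 0` for `lt 1.15 2`: the first fields already decide the comparison (1 < 2), so the second field of 1.15 is never consulted.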
00:09:12.024 [2024-12-05 13:40:43.363365] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121389 ]
00:09:12.024 [2024-12-05 13:40:43.432152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:12.024 [2024-12-05 13:40:43.492723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:12.024 [2024-12-05 13:40:43.492781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:12.024 [2024-12-05 13:40:43.492848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:12.024 [2024-12-05 13:40:43.492851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.396 Running I/O for 1 seconds...
00:09:13.396 lcore 0: 237135
00:09:13.396 lcore 1: 237133
00:09:13.396 lcore 2: 237132
00:09:13.396 lcore 3: 237134
00:09:13.396 done. 
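event_perf was launched with `-m 0xF`, which is why exactly four reactors (cores 0-3) start up and four lcore counters are reported. The `-m` mask is a plain bitmap of core ids; a tiny helper to expand such a mask (mask_to_cores is a hypothetical function for illustration, not an SPDK utility):

```shell
# Expand a hex CPU core mask, as passed to SPDK's -m flag, into the list of
# selected core ids: bit N set means core N is included.
mask_to_cores() {
    local mask=$(( $1 )) core cores=()
    for (( core = 0; mask > 0; core++, mask >>= 1 )); do
        (( mask & 1 )) && cores+=("$core")
    done
    echo "${cores[@]}"
}
mask_to_cores 0xF   # prints: 0 1 2 3
```

The same reading applies to the scheduler test further below, where `-m 0xF -p 0x2` selects cores 0-3 with core 1 as the main lcore.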
00:09:13.396 00:09:13.396 real 0m1.207s 00:09:13.396 user 0m4.124s 00:09:13.396 sys 0m0.076s 00:09:13.396 13:40:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.396 13:40:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:13.396 ************************************ 00:09:13.396 END TEST event_perf 00:09:13.396 ************************************ 00:09:13.396 13:40:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:13.396 13:40:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.396 13:40:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.396 13:40:44 event -- common/autotest_common.sh@10 -- # set +x 00:09:13.396 ************************************ 00:09:13.396 START TEST event_reactor 00:09:13.396 ************************************ 00:09:13.396 13:40:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:13.396 [2024-12-05 13:40:44.619986] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:09:13.396 [2024-12-05 13:40:44.620053] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121551 ] 00:09:13.396 [2024-12-05 13:40:44.685463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.396 [2024-12-05 13:40:44.736127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.330 test_start 00:09:14.330 oneshot 00:09:14.330 tick 100 00:09:14.330 tick 100 00:09:14.330 tick 250 00:09:14.330 tick 100 00:09:14.330 tick 100 00:09:14.330 tick 100 00:09:14.330 tick 250 00:09:14.330 tick 500 00:09:14.330 tick 100 00:09:14.330 tick 100 00:09:14.330 tick 250 00:09:14.330 tick 100 00:09:14.330 tick 100 00:09:14.330 test_end 00:09:14.330 00:09:14.330 real 0m1.191s 00:09:14.330 user 0m1.125s 00:09:14.330 sys 0m0.062s 00:09:14.330 13:40:45 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.330 13:40:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 ************************************ 00:09:14.330 END TEST event_reactor 00:09:14.330 ************************************ 00:09:14.330 13:40:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:14.330 13:40:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:14.330 13:40:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.330 13:40:45 event -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 ************************************ 00:09:14.330 START TEST event_reactor_perf 00:09:14.330 ************************************ 00:09:14.330 13:40:45 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:09:14.589 [2024-12-05 13:40:45.861315] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:14.589 [2024-12-05 13:40:45.861382] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121705 ] 00:09:14.589 [2024-12-05 13:40:45.927176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.589 [2024-12-05 13:40:45.978528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.523 test_start 00:09:15.523 test_end 00:09:15.523 Performance: 448222 events per second 00:09:15.523 00:09:15.523 real 0m1.191s 00:09:15.523 user 0m1.127s 00:09:15.523 sys 0m0.060s 00:09:15.523 13:40:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.523 13:40:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:15.523 ************************************ 00:09:15.523 END TEST event_reactor_perf 00:09:15.523 ************************************ 00:09:15.782 13:40:47 event -- event/event.sh@49 -- # uname -s 00:09:15.782 13:40:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:15.782 13:40:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:15.782 13:40:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.782 13:40:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.782 13:40:47 event -- common/autotest_common.sh@10 -- # set +x 00:09:15.782 ************************************ 00:09:15.782 START TEST event_scheduler 00:09:15.782 ************************************ 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:15.782 * Looking for test storage... 00:09:15.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.782 13:40:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.782 --rc genhtml_branch_coverage=1 00:09:15.782 --rc genhtml_function_coverage=1 00:09:15.782 --rc genhtml_legend=1 00:09:15.782 --rc geninfo_all_blocks=1 00:09:15.782 --rc geninfo_unexecuted_blocks=1 00:09:15.782 00:09:15.782 ' 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.782 --rc genhtml_branch_coverage=1 00:09:15.782 --rc genhtml_function_coverage=1 00:09:15.782 --rc 
genhtml_legend=1 00:09:15.782 --rc geninfo_all_blocks=1 00:09:15.782 --rc geninfo_unexecuted_blocks=1 00:09:15.782 00:09:15.782 ' 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.782 --rc genhtml_branch_coverage=1 00:09:15.782 --rc genhtml_function_coverage=1 00:09:15.782 --rc genhtml_legend=1 00:09:15.782 --rc geninfo_all_blocks=1 00:09:15.782 --rc geninfo_unexecuted_blocks=1 00:09:15.782 00:09:15.782 ' 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.782 --rc genhtml_branch_coverage=1 00:09:15.782 --rc genhtml_function_coverage=1 00:09:15.782 --rc genhtml_legend=1 00:09:15.782 --rc geninfo_all_blocks=1 00:09:15.782 --rc geninfo_unexecuted_blocks=1 00:09:15.782 00:09:15.782 ' 00:09:15.782 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:15.782 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2121978 00:09:15.782 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:15.782 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.782 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2121978 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2121978 ']' 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.782 13:40:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:15.782 [2024-12-05 13:40:47.288432] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:15.782 [2024-12-05 13:40:47.288520] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121978 ] 00:09:16.041 [2024-12-05 13:40:47.359644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.041 [2024-12-05 13:40:47.419645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.041 [2024-12-05 13:40:47.419711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.041 [2024-12-05 13:40:47.419794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.041 [2024-12-05 13:40:47.419797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.041 13:40:47 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.041 13:40:47 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:16.041 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:16.041 13:40:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.041 13:40:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:16.041 [2024-12-05 13:40:47.528789] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:16.041 [2024-12-05 13:40:47.528816] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:16.041 [2024-12-05 13:40:47.528849] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:16.041 [2024-12-05 13:40:47.528861] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:16.041 [2024-12-05 13:40:47.528871] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:16.041 13:40:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.041 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:16.041 13:40:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.041 13:40:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 [2024-12-05 13:40:47.630088] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:16.300 13:40:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:16.300 13:40:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.300 13:40:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 ************************************ 00:09:16.300 START TEST scheduler_create_thread 00:09:16.300 ************************************ 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 2 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 3 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 4 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 5 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 6 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 7 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 8 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 9 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 10 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.300 13:40:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.865 13:40:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.865 00:09:16.865 real 0m0.591s 00:09:16.865 user 0m0.010s 00:09:16.865 sys 0m0.004s 00:09:16.865 13:40:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.865 13:40:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.865 ************************************ 00:09:16.865 END TEST scheduler_create_thread 00:09:16.865 ************************************ 00:09:16.865 13:40:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:16.865 13:40:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2121978 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2121978 ']' 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2121978 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121978 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121978' 00:09:16.865 killing process with pid 2121978 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2121978 00:09:16.865 13:40:48 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2121978 00:09:17.430 [2024-12-05 13:40:48.730318] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:09:17.430 00:09:17.430 real 0m1.848s 00:09:17.430 user 0m2.508s 00:09:17.430 sys 0m0.362s 00:09:17.430 13:40:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.430 13:40:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:17.430 ************************************ 00:09:17.430 END TEST event_scheduler 00:09:17.430 ************************************ 00:09:17.688 13:40:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:17.688 13:40:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:17.688 13:40:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.688 13:40:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.688 13:40:48 event -- common/autotest_common.sh@10 -- # set +x 00:09:17.688 ************************************ 00:09:17.688 START TEST app_repeat 00:09:17.688 ************************************ 00:09:17.688 13:40:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:17.688 13:40:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.688 13:40:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:17.688 13:40:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:17.688 13:40:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:17.688 13:40:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:17.688 13:40:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:17.688 13:40:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:17.688 13:40:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2122208 00:09:17.688 13:40:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:17.688 13:40:49 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:17.688 13:40:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2122208' 00:09:17.688 Process app_repeat pid: 2122208 00:09:17.688 13:40:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:17.688 13:40:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:17.688 spdk_app_start Round 0 00:09:17.688 13:40:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2122208 /var/tmp/spdk-nbd.sock 00:09:17.688 13:40:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2122208 ']' 00:09:17.688 13:40:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:17.688 13:40:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.688 13:40:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:17.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:17.688 13:40:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.688 13:40:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:17.688 [2024-12-05 13:40:49.022403] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:09:17.688 [2024-12-05 13:40:49.022481] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122208 ] 00:09:17.688 [2024-12-05 13:40:49.086170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.688 [2024-12-05 13:40:49.138462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.688 [2024-12-05 13:40:49.138467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.945 13:40:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.945 13:40:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:17.945 13:40:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:18.203 Malloc0 00:09:18.203 13:40:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:18.462 Malloc1 00:09:18.462 13:40:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:18.462 
13:40:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:18.462 13:40:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:18.720 /dev/nbd0 00:09:18.720 13:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:18.720 13:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:18.720 1+0 records in 00:09:18.720 1+0 records out 00:09:18.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186074 s, 22.0 MB/s 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:18.720 13:40:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:18.720 13:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.720 13:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:18.720 13:40:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:18.978 /dev/nbd1 00:09:18.978 13:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:18.978 13:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:18.978 13:40:50 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:18.978 1+0 records in 00:09:18.978 1+0 records out 00:09:18.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198776 s, 20.6 MB/s 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:18.978 13:40:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:18.978 13:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.978 13:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:19.235 13:40:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:19.235 13:40:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.235 13:40:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:19.492 { 00:09:19.492 "nbd_device": "/dev/nbd0", 00:09:19.492 "bdev_name": "Malloc0" 00:09:19.492 }, 00:09:19.492 { 00:09:19.492 "nbd_device": "/dev/nbd1", 00:09:19.492 "bdev_name": "Malloc1" 00:09:19.492 } 00:09:19.492 ]' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:19.492 { 00:09:19.492 "nbd_device": "/dev/nbd0", 00:09:19.492 "bdev_name": "Malloc0" 00:09:19.492 
}, 00:09:19.492 { 00:09:19.492 "nbd_device": "/dev/nbd1", 00:09:19.492 "bdev_name": "Malloc1" 00:09:19.492 } 00:09:19.492 ]' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:19.492 /dev/nbd1' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:19.492 /dev/nbd1' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:19.492 256+0 records in 00:09:19.492 256+0 records out 00:09:19.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508818 s, 206 MB/s 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:19.492 256+0 records in 00:09:19.492 256+0 records out 00:09:19.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199682 s, 52.5 MB/s 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:19.492 256+0 records in 00:09:19.492 256+0 records out 00:09:19.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241727 s, 43.4 MB/s 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:19.492 13:40:50 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.492 13:40:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:19.493 13:40:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:19.493 13:40:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.493 13:40:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.750 13:40:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:20.007 13:40:51 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.007 13:40:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.263 13:40:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:20.263 13:40:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:20.263 13:40:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:20.519 13:40:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:20.519 13:40:51 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:20.776 13:40:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:21.034 [2024-12-05 13:40:52.304136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.034 [2024-12-05 13:40:52.355649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.034 [2024-12-05 13:40:52.355649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.034 [2024-12-05 13:40:52.413627] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:21.034 [2024-12-05 13:40:52.413695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:24.311 13:40:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:24.311 13:40:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:24.311 spdk_app_start Round 1 00:09:24.311 13:40:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2122208 /var/tmp/spdk-nbd.sock 00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2122208 ']' 00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:24.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.311 13:40:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:24.311 13:40:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:24.311 Malloc0 00:09:24.311 13:40:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:24.569 Malloc1 00:09:24.569 13:40:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:24.569 13:40:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:24.827 /dev/nbd0 00:09:24.827 13:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:24.827 13:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:24.827 1+0 records in 00:09:24.827 1+0 records out 00:09:24.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165354 s, 24.8 MB/s 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:24.827 13:40:56 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:24.827 13:40:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:24.827 13:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:24.827 13:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:24.827 13:40:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:25.087 /dev/nbd1 00:09:25.087 13:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:25.087 13:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.088 1+0 records in 00:09:25.088 1+0 records out 00:09:25.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257538 s, 15.9 MB/s 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:25.088 13:40:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:25.088 13:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.088 13:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.088 13:40:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:25.088 13:40:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.088 13:40:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:25.345 13:40:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:25.345 { 00:09:25.345 "nbd_device": "/dev/nbd0", 00:09:25.345 "bdev_name": "Malloc0" 00:09:25.345 }, 00:09:25.345 { 00:09:25.345 "nbd_device": "/dev/nbd1", 00:09:25.345 "bdev_name": "Malloc1" 00:09:25.345 } 00:09:25.345 ]' 00:09:25.345 13:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:25.345 { 00:09:25.345 "nbd_device": "/dev/nbd0", 00:09:25.345 "bdev_name": "Malloc0" 00:09:25.345 }, 00:09:25.345 { 00:09:25.345 "nbd_device": "/dev/nbd1", 00:09:25.345 "bdev_name": "Malloc1" 00:09:25.345 } 00:09:25.345 ]' 00:09:25.345 13:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:25.604 /dev/nbd1' 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:25.604 /dev/nbd1' 00:09:25.604 
13:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:25.604 256+0 records in 00:09:25.604 256+0 records out 00:09:25.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051408 s, 204 MB/s 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:25.604 256+0 records in 00:09:25.604 256+0 records out 00:09:25.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201355 s, 52.1 MB/s 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:25.604 256+0 records in 00:09:25.604 256+0 records out 00:09:25.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220392 s, 47.6 MB/s 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.604 13:40:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.862 13:40:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:26.119 13:40:57 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.119 13:40:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:26.378 13:40:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:26.378 13:40:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:26.636 13:40:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:26.894 [2024-12-05 13:40:58.372248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.164 [2024-12-05 13:40:58.424881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.165 [2024-12-05 13:40:58.424881] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.165 [2024-12-05 13:40:58.480076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:27.165 [2024-12-05 13:40:58.480144] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:29.701 13:41:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:29.701 13:41:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:29.701 spdk_app_start Round 2 00:09:29.701 13:41:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2122208 /var/tmp/spdk-nbd.sock 00:09:29.701 13:41:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2122208 ']' 00:09:29.701 13:41:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:29.701 13:41:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.701 13:41:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:29.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:29.701 13:41:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.701 13:41:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:29.958 13:41:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.958 13:41:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:29.958 13:41:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:30.258 Malloc0 00:09:30.258 13:41:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:30.545 Malloc1 00:09:30.545 13:41:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.545 13:41:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:30.803 /dev/nbd0 00:09:30.803 13:41:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:30.803 13:41:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:30.803 13:41:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:30.803 13:41:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:30.803 13:41:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:30.803 13:41:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:30.803 13:41:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:30.803 13:41:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:30.803 13:41:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.068 13:41:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.068 13:41:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:31.068 1+0 records in 00:09:31.068 1+0 records out 00:09:31.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179821 s, 22.8 MB/s 00:09:31.068 13:41:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.068 13:41:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:31.068 13:41:02 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.068 13:41:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.068 13:41:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:31.068 13:41:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.068 13:41:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.068 13:41:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:31.326 /dev/nbd1 00:09:31.326 13:41:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:31.326 13:41:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:31.326 1+0 records in 00:09:31.326 1+0 records out 00:09:31.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198285 s, 20.7 MB/s 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.326 13:41:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:31.326 13:41:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.326 13:41:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.326 13:41:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.327 13:41:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.327 13:41:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.585 13:41:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:31.585 { 00:09:31.585 "nbd_device": "/dev/nbd0", 00:09:31.585 "bdev_name": "Malloc0" 00:09:31.585 }, 00:09:31.585 { 00:09:31.585 "nbd_device": "/dev/nbd1", 00:09:31.585 "bdev_name": "Malloc1" 00:09:31.585 } 00:09:31.585 ]' 00:09:31.585 13:41:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:31.585 { 00:09:31.585 "nbd_device": "/dev/nbd0", 00:09:31.585 "bdev_name": "Malloc0" 00:09:31.585 }, 00:09:31.585 { 00:09:31.585 "nbd_device": "/dev/nbd1", 00:09:31.585 "bdev_name": "Malloc1" 00:09:31.585 } 00:09:31.585 ]' 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:31.586 /dev/nbd1' 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:31.586 /dev/nbd1' 00:09:31.586 
13:41:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:31.586 256+0 records in 00:09:31.586 256+0 records out 00:09:31.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497228 s, 211 MB/s 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.586 13:41:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:31.586 256+0 records in 00:09:31.586 256+0 records out 00:09:31.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203881 s, 51.4 MB/s 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:31.586 256+0 records in 00:09:31.586 256+0 records out 00:09:31.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221326 s, 47.4 MB/s 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.586 13:41:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:31.843 13:41:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.844 13:41:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:32.410 13:41:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:32.410 13:41:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:32.668 13:41:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:32.668 13:41:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:32.927 13:41:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:33.186 [2024-12-05 13:41:04.461882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.186 [2024-12-05 13:41:04.514182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.186 [2024-12-05 13:41:04.514185] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.186 [2024-12-05 13:41:04.572932] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:33.186 [2024-12-05 13:41:04.572999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:36.467 13:41:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2122208 /var/tmp/spdk-nbd.sock 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2122208 ']' 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:36.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:36.467 13:41:07 event.app_repeat -- event/event.sh@39 -- # killprocess 2122208 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2122208 ']' 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2122208 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2122208 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2122208' 00:09:36.467 killing process with pid 2122208 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2122208 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2122208 00:09:36.467 spdk_app_start is called in Round 0. 00:09:36.467 Shutdown signal received, stop current app iteration 00:09:36.467 Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 reinitialization... 00:09:36.467 spdk_app_start is called in Round 1. 00:09:36.467 Shutdown signal received, stop current app iteration 00:09:36.467 Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 reinitialization... 00:09:36.467 spdk_app_start is called in Round 2. 
00:09:36.467 Shutdown signal received, stop current app iteration 00:09:36.467 Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 reinitialization... 00:09:36.467 spdk_app_start is called in Round 3. 00:09:36.467 Shutdown signal received, stop current app iteration 00:09:36.467 13:41:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:36.467 13:41:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:36.467 00:09:36.467 real 0m18.783s 00:09:36.467 user 0m41.662s 00:09:36.467 sys 0m3.244s 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.467 13:41:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:36.467 ************************************ 00:09:36.467 END TEST app_repeat 00:09:36.467 ************************************ 00:09:36.467 13:41:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:36.467 13:41:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:36.467 13:41:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.467 13:41:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.467 13:41:07 event -- common/autotest_common.sh@10 -- # set +x 00:09:36.467 ************************************ 00:09:36.467 START TEST cpu_locks 00:09:36.467 ************************************ 00:09:36.467 13:41:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:36.467 * Looking for test storage... 
00:09:36.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:36.467 13:41:07 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.467 13:41:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.467 13:41:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.467 13:41:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.467 13:41:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:36.467 13:41:07 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.467 13:41:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.468 --rc genhtml_branch_coverage=1 00:09:36.468 --rc genhtml_function_coverage=1 00:09:36.468 --rc genhtml_legend=1 00:09:36.468 --rc geninfo_all_blocks=1 00:09:36.468 --rc geninfo_unexecuted_blocks=1 00:09:36.468 00:09:36.468 ' 00:09:36.468 13:41:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.468 --rc genhtml_branch_coverage=1 00:09:36.468 --rc genhtml_function_coverage=1 00:09:36.468 --rc genhtml_legend=1 00:09:36.468 --rc geninfo_all_blocks=1 00:09:36.468 --rc geninfo_unexecuted_blocks=1 
00:09:36.468 00:09:36.468 ' 00:09:36.468 13:41:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.468 --rc genhtml_branch_coverage=1 00:09:36.468 --rc genhtml_function_coverage=1 00:09:36.468 --rc genhtml_legend=1 00:09:36.468 --rc geninfo_all_blocks=1 00:09:36.468 --rc geninfo_unexecuted_blocks=1 00:09:36.468 00:09:36.468 ' 00:09:36.468 13:41:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.468 --rc genhtml_branch_coverage=1 00:09:36.468 --rc genhtml_function_coverage=1 00:09:36.468 --rc genhtml_legend=1 00:09:36.468 --rc geninfo_all_blocks=1 00:09:36.468 --rc geninfo_unexecuted_blocks=1 00:09:36.468 00:09:36.468 ' 00:09:36.468 13:41:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:36.468 13:41:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:36.468 13:41:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:36.468 13:41:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:36.468 13:41:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.468 13:41:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.468 13:41:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.726 ************************************ 00:09:36.726 START TEST default_locks 00:09:36.726 ************************************ 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2124700 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2124700 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2124700 ']' 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.726 13:41:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.726 [2024-12-05 13:41:08.046933] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:09:36.726 [2024-12-05 13:41:08.047026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124700 ] 00:09:36.726 [2024-12-05 13:41:08.114594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.726 [2024-12-05 13:41:08.167737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.984 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.984 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:36.984 13:41:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2124700 00:09:36.984 13:41:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2124700 00:09:36.984 13:41:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:37.241 lslocks: write error 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2124700 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2124700 ']' 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2124700 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2124700 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2124700' 00:09:37.241 killing process with pid 2124700 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2124700 00:09:37.241 13:41:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2124700 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2124700 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2124700 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2124700 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2124700 ']' 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2124700) - No such process 00:09:37.806 ERROR: process (pid: 2124700) is no longer running 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:37.806 00:09:37.806 real 0m1.080s 00:09:37.806 user 0m1.055s 00:09:37.806 sys 0m0.468s 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.806 13:41:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.806 ************************************ 00:09:37.806 END TEST default_locks 00:09:37.806 ************************************ 00:09:37.806 13:41:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:37.806 13:41:09 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.806 13:41:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.806 13:41:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.806 ************************************ 00:09:37.806 START TEST default_locks_via_rpc 00:09:37.806 ************************************ 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2124862 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2124862 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2124862 ']' 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.806 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.807 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.807 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.807 [2024-12-05 13:41:09.184282] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:09:37.807 [2024-12-05 13:41:09.184375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124862 ] 00:09:37.807 [2024-12-05 13:41:09.250606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.807 [2024-12-05 13:41:09.308951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.065 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.322 13:41:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2124862 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2124862 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2124862 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2124862 ']' 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2124862 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2124862 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2124862' 00:09:38.322 killing process with pid 2124862 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2124862 00:09:38.322 13:41:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2124862 00:09:38.889 00:09:38.889 real 0m1.138s 00:09:38.889 user 0m1.106s 00:09:38.889 sys 0m0.499s 00:09:38.889 13:41:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.889 13:41:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.889 ************************************ 00:09:38.889 END TEST default_locks_via_rpc 00:09:38.889 ************************************ 00:09:38.889 13:41:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:38.889 13:41:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.889 13:41:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.889 13:41:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:38.889 ************************************ 00:09:38.889 START TEST non_locking_app_on_locked_coremask 00:09:38.889 ************************************ 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2125024 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2125024 /var/tmp/spdk.sock 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2125024 ']' 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:38.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.889 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.889 [2024-12-05 13:41:10.374356] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:38.889 [2024-12-05 13:41:10.374479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125024 ] 00:09:39.147 [2024-12-05 13:41:10.441442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.147 [2024-12-05 13:41:10.499504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.404 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2125040 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2125040 /var/tmp/spdk2.sock 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2125040 ']' 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:39.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.405 13:41:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:39.405 [2024-12-05 13:41:10.828224] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:39.405 [2024-12-05 13:41:10.828309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125040 ] 00:09:39.663 [2024-12-05 13:41:10.931422] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:39.663 [2024-12-05 13:41:10.931451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.663 [2024-12-05 13:41:11.040817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.596 13:41:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.597 13:41:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:40.597 13:41:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2125024 00:09:40.597 13:41:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2125024 00:09:40.597 13:41:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:40.859 lslocks: write error 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2125024 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2125024 ']' 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2125024 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125024 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2125024' 00:09:40.859 killing process with pid 2125024 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2125024 00:09:40.859 13:41:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2125024 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2125040 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2125040 ']' 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2125040 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125040 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125040' 00:09:41.800 killing process with pid 2125040 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2125040 00:09:41.800 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2125040 00:09:42.060 00:09:42.060 real 0m3.234s 00:09:42.060 user 0m3.476s 00:09:42.060 sys 0m1.029s 00:09:42.060 13:41:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.060 13:41:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.060 ************************************ 00:09:42.060 END TEST non_locking_app_on_locked_coremask 00:09:42.060 ************************************ 00:09:42.060 13:41:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:42.060 13:41:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.060 13:41:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.060 13:41:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:42.321 ************************************ 00:09:42.321 START TEST locking_app_on_unlocked_coremask 00:09:42.321 ************************************ 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2125456 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2125456 /var/tmp/spdk.sock 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2125456 ']' 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.321 13:41:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.321 13:41:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.321 [2024-12-05 13:41:13.660793] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:42.321 [2024-12-05 13:41:13.660885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125456 ] 00:09:42.321 [2024-12-05 13:41:13.726284] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:42.321 [2024-12-05 13:41:13.726324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.321 [2024-12-05 13:41:13.784586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2125467 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2125467 /var/tmp/spdk2.sock 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2125467 ']' 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:42.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.580 13:41:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.840 [2024-12-05 13:41:14.109014] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:09:42.840 [2024-12-05 13:41:14.109102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125467 ] 00:09:42.840 [2024-12-05 13:41:14.216236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.840 [2024-12-05 13:41:14.322481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.776 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.776 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:43.776 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2125467 00:09:43.776 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2125467 00:09:43.776 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:44.035 lslocks: write error 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2125456 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2125456 ']' 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2125456 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125456 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125456' 00:09:44.035 killing process with pid 2125456 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2125456 00:09:44.035 13:41:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2125456 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2125467 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2125467 ']' 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2125467 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125467 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125467' 00:09:44.976 killing process with pid 2125467 00:09:44.976 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2125467 00:09:44.976 13:41:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2125467 00:09:45.545 00:09:45.545 real 0m3.168s 00:09:45.545 user 0m3.422s 00:09:45.545 sys 0m1.016s 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.545 ************************************ 00:09:45.545 END TEST locking_app_on_unlocked_coremask 00:09:45.545 ************************************ 00:09:45.545 13:41:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:45.545 13:41:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.545 13:41:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.545 13:41:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:45.545 ************************************ 00:09:45.545 START TEST locking_app_on_locked_coremask 00:09:45.545 ************************************ 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2125887 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2125887 /var/tmp/spdk.sock 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2125887 ']' 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
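The `killprocess` sequences traced above follow a recognizable shape: probe liveness with `kill -0`, resolve the process name with `ps --no-headers -o comm=`, refuse to kill a `sudo` wrapper, then kill and reap. A hedged sketch of that pattern (function and variable names here are illustrative, not the `autotest_common.sh` source):

```shell
# Hedged sketch of the killprocess pattern exercised in the log above.
killprocess_demo() {
    local pid=$1
    # kill -0 sends no signal; it only tests whether the PID exists.
    kill -0 "$pid" 2>/dev/null || return 0
    local name
    name=$(ps --no-headers -o comm= "$pid")   # resolve the command name
    [ "$name" = sudo ] && return 1            # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}
```

Usage mirrors the log: start a background process, then `killprocess_demo $!`; afterwards `kill -0` on that PID fails, confirming it is gone.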
00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.545 13:41:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.545 [2024-12-05 13:41:16.880234] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:45.545 [2024-12-05 13:41:16.880334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125887 ] 00:09:45.545 [2024-12-05 13:41:16.944521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.545 [2024-12-05 13:41:16.998274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2125901 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2125901 /var/tmp/spdk2.sock 
00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2125901 /var/tmp/spdk2.sock 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2125901 /var/tmp/spdk2.sock 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2125901 ']' 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:45.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.803 13:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.803 [2024-12-05 13:41:17.317892] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:45.803 [2024-12-05 13:41:17.317984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125901 ] 00:09:46.062 [2024-12-05 13:41:17.418214] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2125887 has claimed it. 00:09:46.062 [2024-12-05 13:41:17.418282] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:46.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2125901) - No such process 00:09:46.632 ERROR: process (pid: 2125901) is no longer running 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2125887 00:09:46.632 13:41:18 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2125887 00:09:46.632 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:47.201 lslocks: write error 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2125887 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2125887 ']' 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2125887 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125887 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125887' 00:09:47.201 killing process with pid 2125887 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2125887 00:09:47.201 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2125887 00:09:47.460 00:09:47.460 real 0m2.070s 00:09:47.460 user 0m2.297s 00:09:47.460 sys 0m0.647s 00:09:47.460 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.460 13:41:18 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.460 ************************************ 00:09:47.460 END TEST locking_app_on_locked_coremask 00:09:47.460 ************************************ 00:09:47.460 13:41:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:47.460 13:41:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.460 13:41:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.460 13:41:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:47.460 ************************************ 00:09:47.460 START TEST locking_overlapped_coremask 00:09:47.460 ************************************ 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2126185 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2126185 /var/tmp/spdk.sock 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2126185 ']' 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.460 13:41:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:47.718 [2024-12-05 13:41:19.000579] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:47.718 [2024-12-05 13:41:19.000662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126185 ] 00:09:47.718 [2024-12-05 13:41:19.065147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.718 [2024-12-05 13:41:19.117789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.718 [2024-12-05 13:41:19.117850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.718 [2024-12-05 13:41:19.117853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2126202 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2126202 /var/tmp/spdk2.sock 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2126202 /var/tmp/spdk2.sock 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2126202 /var/tmp/spdk2.sock 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2126202 ']' 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:47.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.977 13:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:47.977 [2024-12-05 13:41:19.452845] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:09:47.977 [2024-12-05 13:41:19.452937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126202 ] 00:09:48.235 [2024-12-05 13:41:19.557793] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2126185 has claimed it. 00:09:48.235 [2024-12-05 13:41:19.557864] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:48.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2126202) - No such process 00:09:48.804 ERROR: process (pid: 2126202) is no longer running 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2126185 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2126185 ']' 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2126185 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2126185 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2126185' 00:09:48.804 killing process with pid 2126185 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2126185 00:09:48.804 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2126185 00:09:49.373 00:09:49.373 real 0m1.679s 00:09:49.373 user 0m4.685s 00:09:49.373 sys 0m0.459s 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:49.373 
************************************ 00:09:49.373 END TEST locking_overlapped_coremask 00:09:49.373 ************************************ 00:09:49.373 13:41:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:49.373 13:41:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.373 13:41:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.373 13:41:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.373 ************************************ 00:09:49.373 START TEST locking_overlapped_coremask_via_rpc 00:09:49.373 ************************************ 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2126364 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2126364 /var/tmp/spdk.sock 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2126364 ']' 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:49.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.373 13:41:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.373 [2024-12-05 13:41:20.732763] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:49.373 [2024-12-05 13:41:20.732861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126364 ] 00:09:49.373 [2024-12-05 13:41:20.795658] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:49.373 [2024-12-05 13:41:20.795698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.373 [2024-12-05 13:41:20.857244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.373 [2024-12-05 13:41:20.857284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.373 [2024-12-05 13:41:20.857288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2126495 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 2126495 /var/tmp/spdk2.sock 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2126495 ']' 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:49.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.631 13:41:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.890 [2024-12-05 13:41:21.190027] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:49.890 [2024-12-05 13:41:21.190116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126495 ] 00:09:49.890 [2024-12-05 13:41:21.293389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:49.890 [2024-12-05 13:41:21.293450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.890 [2024-12-05 13:41:21.408372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.890 [2024-12-05 13:41:21.411501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:49.890 [2024-12-05 13:41:21.411504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:50.823 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.823 13:41:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 [2024-12-05 13:41:22.203521] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2126364 has claimed it. 00:09:50.824 request: 00:09:50.824 { 00:09:50.824 "method": "framework_enable_cpumask_locks", 00:09:50.824 "req_id": 1 00:09:50.824 } 00:09:50.824 Got JSON-RPC error response 00:09:50.824 response: 00:09:50.824 { 00:09:50.824 "code": -32603, 00:09:50.824 "message": "Failed to claim CPU core: 2" 00:09:50.824 } 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2126364 /var/tmp/spdk.sock 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2126364 ']' 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.824 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2126495 /var/tmp/spdk2.sock 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2126495 ']' 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.083 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:51.343 00:09:51.343 real 0m2.070s 00:09:51.343 user 0m1.110s 00:09:51.343 sys 0m0.218s 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.343 13:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.343 ************************************ 00:09:51.343 END TEST locking_overlapped_coremask_via_rpc 00:09:51.343 ************************************ 00:09:51.343 13:41:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:51.343 13:41:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2126364 ]] 00:09:51.343 13:41:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2126364 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2126364 ']' 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2126364 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2126364 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2126364' 00:09:51.343 killing process with pid 2126364 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2126364 00:09:51.343 13:41:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2126364 00:09:51.910 13:41:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2126495 ]] 00:09:51.910 13:41:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2126495 00:09:51.910 13:41:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2126495 ']' 00:09:51.910 13:41:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2126495 00:09:51.910 13:41:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:51.910 13:41:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.910 13:41:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2126495 00:09:51.911 13:41:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:51.911 13:41:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:51.911 13:41:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2126495' 00:09:51.911 killing process with pid 2126495 00:09:51.911 13:41:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2126495 00:09:51.911 13:41:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2126495 00:09:52.479 13:41:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:52.479 13:41:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:52.479 13:41:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2126364 ]] 00:09:52.479 13:41:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2126364 00:09:52.479 13:41:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2126364 ']' 00:09:52.479 13:41:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2126364 00:09:52.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2126364) - No such process 00:09:52.479 13:41:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2126364 is not found' 00:09:52.479 Process with pid 2126364 is not found 00:09:52.479 13:41:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2126495 ]] 00:09:52.479 13:41:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2126495 00:09:52.479 13:41:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2126495 ']' 00:09:52.479 13:41:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2126495 00:09:52.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2126495) - No such process 00:09:52.479 13:41:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2126495 is not found' 00:09:52.479 Process with pid 2126495 is not found 00:09:52.479 13:41:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:52.479 00:09:52.479 real 0m15.887s 00:09:52.479 user 0m29.002s 00:09:52.479 sys 0m5.272s 00:09:52.479 13:41:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.479 
13:41:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.479 ************************************ 00:09:52.479 END TEST cpu_locks 00:09:52.479 ************************************ 00:09:52.479 00:09:52.479 real 0m40.566s 00:09:52.479 user 1m19.767s 00:09:52.479 sys 0m9.343s 00:09:52.479 13:41:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.479 13:41:23 event -- common/autotest_common.sh@10 -- # set +x 00:09:52.479 ************************************ 00:09:52.479 END TEST event 00:09:52.479 ************************************ 00:09:52.479 13:41:23 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:52.479 13:41:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.479 13:41:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.479 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:09:52.479 ************************************ 00:09:52.479 START TEST thread 00:09:52.479 ************************************ 00:09:52.479 13:41:23 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:52.479 * Looking for test storage... 
00:09:52.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:52.479 13:41:23 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.479 13:41:23 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.479 13:41:23 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.479 13:41:23 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.479 13:41:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.479 13:41:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.479 13:41:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.479 13:41:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.479 13:41:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.479 13:41:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.479 13:41:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.479 13:41:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.479 13:41:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.479 13:41:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.480 13:41:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.480 13:41:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:52.480 13:41:23 thread -- scripts/common.sh@345 -- # : 1 00:09:52.480 13:41:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.480 13:41:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.480 13:41:23 thread -- scripts/common.sh@365 -- # decimal 1 00:09:52.480 13:41:23 thread -- scripts/common.sh@353 -- # local d=1 00:09:52.480 13:41:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.480 13:41:23 thread -- scripts/common.sh@355 -- # echo 1 00:09:52.480 13:41:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.480 13:41:23 thread -- scripts/common.sh@366 -- # decimal 2 00:09:52.480 13:41:23 thread -- scripts/common.sh@353 -- # local d=2 00:09:52.480 13:41:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.480 13:41:23 thread -- scripts/common.sh@355 -- # echo 2 00:09:52.480 13:41:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.480 13:41:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.480 13:41:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.480 13:41:23 thread -- scripts/common.sh@368 -- # return 0 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.480 --rc genhtml_branch_coverage=1 00:09:52.480 --rc genhtml_function_coverage=1 00:09:52.480 --rc genhtml_legend=1 00:09:52.480 --rc geninfo_all_blocks=1 00:09:52.480 --rc geninfo_unexecuted_blocks=1 00:09:52.480 00:09:52.480 ' 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.480 --rc genhtml_branch_coverage=1 00:09:52.480 --rc genhtml_function_coverage=1 00:09:52.480 --rc genhtml_legend=1 00:09:52.480 --rc geninfo_all_blocks=1 00:09:52.480 --rc geninfo_unexecuted_blocks=1 00:09:52.480 00:09:52.480 ' 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.480 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.480 --rc genhtml_branch_coverage=1 00:09:52.480 --rc genhtml_function_coverage=1 00:09:52.480 --rc genhtml_legend=1 00:09:52.480 --rc geninfo_all_blocks=1 00:09:52.480 --rc geninfo_unexecuted_blocks=1 00:09:52.480 00:09:52.480 ' 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.480 --rc genhtml_branch_coverage=1 00:09:52.480 --rc genhtml_function_coverage=1 00:09:52.480 --rc genhtml_legend=1 00:09:52.480 --rc geninfo_all_blocks=1 00:09:52.480 --rc geninfo_unexecuted_blocks=1 00:09:52.480 00:09:52.480 ' 00:09:52.480 13:41:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.480 13:41:23 thread -- common/autotest_common.sh@10 -- # set +x 00:09:52.480 ************************************ 00:09:52.480 START TEST thread_poller_perf 00:09:52.480 ************************************ 00:09:52.480 13:41:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:52.480 [2024-12-05 13:41:23.974158] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:09:52.480 [2024-12-05 13:41:23.974220] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126870 ] 00:09:52.737 [2024-12-05 13:41:24.039293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.737 [2024-12-05 13:41:24.095021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.737 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:53.678 [2024-12-05T12:41:25.204Z] ====================================== 00:09:53.678 [2024-12-05T12:41:25.204Z] busy:2711903406 (cyc) 00:09:53.678 [2024-12-05T12:41:25.204Z] total_run_count: 365000 00:09:53.678 [2024-12-05T12:41:25.204Z] tsc_hz: 2700000000 (cyc) 00:09:53.678 [2024-12-05T12:41:25.204Z] ====================================== 00:09:53.678 [2024-12-05T12:41:25.204Z] poller_cost: 7429 (cyc), 2751 (nsec) 00:09:53.678 00:09:53.678 real 0m1.202s 00:09:53.678 user 0m1.136s 00:09:53.678 sys 0m0.062s 00:09:53.678 13:41:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.678 13:41:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:53.678 ************************************ 00:09:53.678 END TEST thread_poller_perf 00:09:53.678 ************************************ 00:09:53.678 13:41:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:53.678 13:41:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:53.678 13:41:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.678 13:41:25 thread -- common/autotest_common.sh@10 -- # set +x 00:09:53.940 ************************************ 00:09:53.940 START TEST thread_poller_perf 00:09:53.940 
************************************ 00:09:53.940 13:41:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:53.940 [2024-12-05 13:41:25.227855] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:53.940 [2024-12-05 13:41:25.227918] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127031 ] 00:09:53.940 [2024-12-05 13:41:25.295911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.940 [2024-12-05 13:41:25.347778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.940 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:55.320 [2024-12-05T12:41:26.846Z] ====================================== 00:09:55.320 [2024-12-05T12:41:26.846Z] busy:2702165016 (cyc) 00:09:55.320 [2024-12-05T12:41:26.846Z] total_run_count: 4904000 00:09:55.320 [2024-12-05T12:41:26.846Z] tsc_hz: 2700000000 (cyc) 00:09:55.320 [2024-12-05T12:41:26.846Z] ====================================== 00:09:55.320 [2024-12-05T12:41:26.846Z] poller_cost: 551 (cyc), 204 (nsec) 00:09:55.320 00:09:55.320 real 0m1.193s 00:09:55.320 user 0m1.126s 00:09:55.320 sys 0m0.061s 00:09:55.320 13:41:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.320 13:41:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:55.320 ************************************ 00:09:55.320 END TEST thread_poller_perf 00:09:55.320 ************************************ 00:09:55.320 13:41:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:55.320 00:09:55.320 real 0m2.634s 00:09:55.320 user 0m2.392s 00:09:55.320 sys 0m0.246s 00:09:55.320 13:41:26 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.320 13:41:26 thread -- common/autotest_common.sh@10 -- # set +x 00:09:55.320 ************************************ 00:09:55.320 END TEST thread 00:09:55.320 ************************************ 00:09:55.320 13:41:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:55.320 13:41:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:55.320 13:41:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.320 13:41:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.320 13:41:26 -- common/autotest_common.sh@10 -- # set +x 00:09:55.320 ************************************ 00:09:55.320 START TEST app_cmdline 00:09:55.320 ************************************ 00:09:55.320 13:41:26 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:55.320 * Looking for test storage... 00:09:55.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:55.320 13:41:26 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.320 13:41:26 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.320 13:41:26 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.320 13:41:26 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.320 13:41:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.320 13:41:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.320 13:41:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.320 13:41:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.321 13:41:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.321 --rc genhtml_branch_coverage=1 
00:09:55.321 --rc genhtml_function_coverage=1 00:09:55.321 --rc genhtml_legend=1 00:09:55.321 --rc geninfo_all_blocks=1 00:09:55.321 --rc geninfo_unexecuted_blocks=1 00:09:55.321 00:09:55.321 ' 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.321 --rc genhtml_branch_coverage=1 00:09:55.321 --rc genhtml_function_coverage=1 00:09:55.321 --rc genhtml_legend=1 00:09:55.321 --rc geninfo_all_blocks=1 00:09:55.321 --rc geninfo_unexecuted_blocks=1 00:09:55.321 00:09:55.321 ' 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.321 --rc genhtml_branch_coverage=1 00:09:55.321 --rc genhtml_function_coverage=1 00:09:55.321 --rc genhtml_legend=1 00:09:55.321 --rc geninfo_all_blocks=1 00:09:55.321 --rc geninfo_unexecuted_blocks=1 00:09:55.321 00:09:55.321 ' 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.321 --rc genhtml_branch_coverage=1 00:09:55.321 --rc genhtml_function_coverage=1 00:09:55.321 --rc genhtml_legend=1 00:09:55.321 --rc geninfo_all_blocks=1 00:09:55.321 --rc geninfo_unexecuted_blocks=1 00:09:55.321 00:09:55.321 ' 00:09:55.321 13:41:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:55.321 13:41:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2127350 00:09:55.321 13:41:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:55.321 13:41:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2127350 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2127350 ']' 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.321 13:41:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:55.321 [2024-12-05 13:41:26.669323] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:09:55.321 [2024-12-05 13:41:26.669449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127350 ] 00:09:55.321 [2024-12-05 13:41:26.733609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.321 [2024-12-05 13:41:26.787813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.581 13:41:27 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.581 13:41:27 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:55.581 13:41:27 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:55.839 { 00:09:55.839 "version": "SPDK v25.01-pre git sha1 62083ef48", 00:09:55.839 "fields": { 00:09:55.839 "major": 25, 00:09:55.839 "minor": 1, 00:09:55.839 "patch": 0, 00:09:55.839 "suffix": "-pre", 00:09:55.839 "commit": "62083ef48" 00:09:55.839 } 00:09:55.839 } 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:55.839 13:41:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:55.839 13:41:27 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:56.100 request: 00:09:56.100 { 00:09:56.100 "method": "env_dpdk_get_mem_stats", 00:09:56.100 "req_id": 1 00:09:56.100 } 00:09:56.100 Got JSON-RPC error response 00:09:56.100 response: 00:09:56.100 { 00:09:56.100 "code": -32601, 00:09:56.100 "message": "Method not found" 00:09:56.100 } 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.408 13:41:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2127350 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2127350 ']' 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2127350 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2127350 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2127350' 00:09:56.408 killing process with pid 2127350 00:09:56.408 
13:41:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 2127350 00:09:56.408 13:41:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 2127350 00:09:56.686 00:09:56.686 real 0m1.607s 00:09:56.686 user 0m1.974s 00:09:56.686 sys 0m0.489s 00:09:56.686 13:41:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.686 13:41:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:56.686 ************************************ 00:09:56.686 END TEST app_cmdline 00:09:56.686 ************************************ 00:09:56.686 13:41:28 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:56.686 13:41:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.686 13:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.686 13:41:28 -- common/autotest_common.sh@10 -- # set +x 00:09:56.686 ************************************ 00:09:56.686 START TEST version 00:09:56.686 ************************************ 00:09:56.686 13:41:28 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:56.686 * Looking for test storage... 
00:09:56.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:56.686 13:41:28 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.686 13:41:28 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.686 13:41:28 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:56.944 13:41:28 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:56.944 13:41:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.944 13:41:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.944 13:41:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.944 13:41:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.944 13:41:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.944 13:41:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.944 13:41:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.944 13:41:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.944 13:41:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.944 13:41:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.944 13:41:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.944 13:41:28 version -- scripts/common.sh@344 -- # case "$op" in 00:09:56.944 13:41:28 version -- scripts/common.sh@345 -- # : 1 00:09:56.944 13:41:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.944 13:41:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.944 13:41:28 version -- scripts/common.sh@365 -- # decimal 1 00:09:56.944 13:41:28 version -- scripts/common.sh@353 -- # local d=1 00:09:56.944 13:41:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.944 13:41:28 version -- scripts/common.sh@355 -- # echo 1 00:09:56.944 13:41:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.944 13:41:28 version -- scripts/common.sh@366 -- # decimal 2 00:09:56.944 13:41:28 version -- scripts/common.sh@353 -- # local d=2 00:09:56.944 13:41:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.944 13:41:28 version -- scripts/common.sh@355 -- # echo 2 00:09:56.944 13:41:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.944 13:41:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.944 13:41:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.944 13:41:28 version -- scripts/common.sh@368 -- # return 0 00:09:56.945 13:41:28 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.945 13:41:28 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:56.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.945 --rc genhtml_branch_coverage=1 00:09:56.945 --rc genhtml_function_coverage=1 00:09:56.945 --rc genhtml_legend=1 00:09:56.945 --rc geninfo_all_blocks=1 00:09:56.945 --rc geninfo_unexecuted_blocks=1 00:09:56.945 00:09:56.945 ' 00:09:56.945 13:41:28 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:56.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.945 --rc genhtml_branch_coverage=1 00:09:56.945 --rc genhtml_function_coverage=1 00:09:56.945 --rc genhtml_legend=1 00:09:56.945 --rc geninfo_all_blocks=1 00:09:56.945 --rc geninfo_unexecuted_blocks=1 00:09:56.945 00:09:56.945 ' 00:09:56.945 13:41:28 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:56.945 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.945 --rc genhtml_branch_coverage=1 00:09:56.945 --rc genhtml_function_coverage=1 00:09:56.945 --rc genhtml_legend=1 00:09:56.945 --rc geninfo_all_blocks=1 00:09:56.945 --rc geninfo_unexecuted_blocks=1 00:09:56.945 00:09:56.945 ' 00:09:56.945 13:41:28 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:56.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.945 --rc genhtml_branch_coverage=1 00:09:56.945 --rc genhtml_function_coverage=1 00:09:56.945 --rc genhtml_legend=1 00:09:56.945 --rc geninfo_all_blocks=1 00:09:56.945 --rc geninfo_unexecuted_blocks=1 00:09:56.945 00:09:56.945 ' 00:09:56.945 13:41:28 version -- app/version.sh@17 -- # get_header_version major 00:09:56.945 13:41:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # cut -f2 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.945 13:41:28 version -- app/version.sh@17 -- # major=25 00:09:56.945 13:41:28 version -- app/version.sh@18 -- # get_header_version minor 00:09:56.945 13:41:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # cut -f2 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.945 13:41:28 version -- app/version.sh@18 -- # minor=1 00:09:56.945 13:41:28 version -- app/version.sh@19 -- # get_header_version patch 00:09:56.945 13:41:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # cut -f2 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.945 
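The `cmp_versions` trace repeated throughout this log (`lt 1.15 2`) splits both version strings on `.`, `-`, and `:` via `IFS=.-:`, then compares components numerically, padding the shorter array with zeros. A hedged re-sketch of that loop (the function name `version_lt` is mine; the real logic lives in `scripts/common.sh`, and non-numeric components such as `rc0` are out of scope for this simplified version):

```shell
# Sketch of the componentwise "less than" the traces above exercise.
# Splitting on . - : mirrors the IFS=.-: lines in the log.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad missing components with 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```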
13:41:28 version -- app/version.sh@19 -- # patch=0 00:09:56.945 13:41:28 version -- app/version.sh@20 -- # get_header_version suffix 00:09:56.945 13:41:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # cut -f2 00:09:56.945 13:41:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.945 13:41:28 version -- app/version.sh@20 -- # suffix=-pre 00:09:56.945 13:41:28 version -- app/version.sh@22 -- # version=25.1 00:09:56.945 13:41:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:56.945 13:41:28 version -- app/version.sh@28 -- # version=25.1rc0 00:09:56.945 13:41:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:56.945 13:41:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:56.945 13:41:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:56.945 13:41:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:56.945 00:09:56.945 real 0m0.198s 00:09:56.945 user 0m0.131s 00:09:56.945 sys 0m0.093s 00:09:56.945 13:41:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.945 13:41:28 version -- common/autotest_common.sh@10 -- # set +x 00:09:56.945 ************************************ 00:09:56.945 END TEST version 00:09:56.945 ************************************ 00:09:56.945 13:41:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:56.945 13:41:28 -- spdk/autotest.sh@194 -- # uname -s 00:09:56.945 13:41:28 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:09:56.945 13:41:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:56.945 13:41:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:56.945 13:41:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:56.945 13:41:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.945 13:41:28 -- common/autotest_common.sh@10 -- # set +x 00:09:56.945 13:41:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:56.945 13:41:28 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:56.945 13:41:28 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:56.945 13:41:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.945 13:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.945 13:41:28 -- common/autotest_common.sh@10 -- # set +x 00:09:56.945 ************************************ 00:09:56.945 START TEST nvmf_tcp 00:09:56.945 ************************************ 00:09:56.945 13:41:28 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:56.945 * Looking for test storage... 
00:09:56.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:56.945 13:41:28 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.945 13:41:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.945 13:41:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.203 13:41:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.203 13:41:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:57.203 13:41:28 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.203 13:41:28 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:57.204 13:41:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:57.204 13:41:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:57.204 13:41:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.204 13:41:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.204 13:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.204 ************************************ 00:09:57.204 START TEST nvmf_target_core 00:09:57.204 ************************************ 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:57.204 * Looking for test storage... 
00:09:57.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 
00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.204 --rc genhtml_branch_coverage=1 00:09:57.204 --rc genhtml_function_coverage=1 00:09:57.204 --rc genhtml_legend=1 00:09:57.204 --rc geninfo_all_blocks=1 00:09:57.204 --rc geninfo_unexecuted_blocks=1 00:09:57.204 00:09:57.204 ' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.204 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
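The `[: : integer expression expected` line captured above is a genuine (and here harmless) bash diagnostic: `common.sh` line 33 evaluates `[ '' -eq 1 ]`, and `-eq` requires integers on both sides, so an empty `$SPDK_TEST_USDT`-style variable trips it. A small illustration of the failure and the usual guard of defaulting the empty value to 0; the variable name `val` is mine:

```shell
# Reproduces the "[: : integer expression expected" diagnostic from the log,
# then shows the common fix: default an empty value to 0 before -eq.
val=""
if [ "$val" -eq 1 ] 2>/dev/null; then   # fails: '' is not an integer
    echo "enabled"
fi
if [ "${val:-0}" -eq 1 ]; then          # ${val:-0} substitutes 0 when empty
    echo "enabled"
else
    echo "disabled or unset"
fi
```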
00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.464 ************************************ 00:09:57.464 START TEST nvmf_abort 00:09:57.464 ************************************ 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:57.464 * Looking for test storage... 
00:09:57.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.464 
13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.464 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.465 --rc genhtml_branch_coverage=1 00:09:57.465 --rc genhtml_function_coverage=1 00:09:57.465 --rc genhtml_legend=1 00:09:57.465 --rc geninfo_all_blocks=1 00:09:57.465 --rc 
geninfo_unexecuted_blocks=1 00:09:57.465 00:09:57.465 ' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.465 --rc genhtml_branch_coverage=1 00:09:57.465 --rc genhtml_function_coverage=1 00:09:57.465 --rc genhtml_legend=1 00:09:57.465 --rc geninfo_all_blocks=1 00:09:57.465 --rc geninfo_unexecuted_blocks=1 00:09:57.465 00:09:57.465 ' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.465 --rc genhtml_branch_coverage=1 00:09:57.465 --rc genhtml_function_coverage=1 00:09:57.465 --rc genhtml_legend=1 00:09:57.465 --rc geninfo_all_blocks=1 00:09:57.465 --rc geninfo_unexecuted_blocks=1 00:09:57.465 00:09:57.465 ' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.465 --rc genhtml_branch_coverage=1 00:09:57.465 --rc genhtml_function_coverage=1 00:09:57.465 --rc genhtml_legend=1 00:09:57.465 --rc geninfo_all_blocks=1 00:09:57.465 --rc geninfo_unexecuted_blocks=1 00:09:57.465 00:09:57.465 ' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.465 13:41:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.465 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:00.000 13:41:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:00.000 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:00.000 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:00.000 13:41:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:00.000 Found net devices under 0000:09:00.0: cvl_0_0 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:10:00.000 Found net devices under 0000:09:00.1: cvl_0_1 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:00.000 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:00.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:10:00.001 00:10:00.001 --- 10.0.0.2 ping statistics --- 00:10:00.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.001 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:10:00.001 00:10:00.001 --- 10.0.0.1 ping statistics --- 00:10:00.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.001 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2129444 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2129444 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2129444 ']' 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.001 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.001 [2024-12-05 13:41:31.307114] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:10:00.001 [2024-12-05 13:41:31.307211] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.001 [2024-12-05 13:41:31.383154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.001 [2024-12-05 13:41:31.441735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.001 [2024-12-05 13:41:31.441804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.001 [2024-12-05 13:41:31.441818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.001 [2024-12-05 13:41:31.441843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.001 [2024-12-05 13:41:31.441853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.001 [2024-12-05 13:41:31.443465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.001 [2024-12-05 13:41:31.447511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.001 [2024-12-05 13:41:31.447517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.261 [2024-12-05 13:41:31.601495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.261 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.262 Malloc0 00:10:00.262 13:41:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.262 Delay0 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.262 [2024-12-05 13:41:31.675973] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.262 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:00.521 [2024-12-05 13:41:31.831492] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:03.073 Initializing NVMe Controllers 00:10:03.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:03.073 controller IO queue size 128 less than required 00:10:03.073 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:03.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:03.073 Initialization complete. Launching workers. 
00:10:03.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28103 00:10:03.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28164, failed to submit 62 00:10:03.073 success 28107, unsuccessful 57, failed 0 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.073 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.073 rmmod nvme_tcp 00:10:03.073 rmmod nvme_fabrics 00:10:03.073 rmmod nvme_keyring 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:10:03.073 13:41:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2129444 ']' 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2129444 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2129444 ']' 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2129444 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2129444 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2129444' 00:10:03.073 killing process with pid 2129444 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2129444 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2129444 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.073 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.984 00:10:04.984 real 0m7.619s 00:10:04.984 user 0m11.229s 00:10:04.984 sys 0m2.645s 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:04.984 ************************************ 00:10:04.984 END TEST nvmf_abort 00:10:04.984 ************************************ 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.984 ************************************ 00:10:04.984 START TEST nvmf_ns_hotplug_stress 00:10:04.984 ************************************ 00:10:04.984 13:41:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:04.984 * Looking for test storage... 00:10:04.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.984 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.244 
13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.244 13:41:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.244 --rc genhtml_branch_coverage=1 00:10:05.244 --rc genhtml_function_coverage=1 00:10:05.244 --rc genhtml_legend=1 00:10:05.244 --rc geninfo_all_blocks=1 00:10:05.244 --rc geninfo_unexecuted_blocks=1 00:10:05.244 00:10:05.244 ' 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.244 --rc genhtml_branch_coverage=1 00:10:05.244 --rc genhtml_function_coverage=1 00:10:05.244 --rc genhtml_legend=1 00:10:05.244 --rc geninfo_all_blocks=1 00:10:05.244 --rc geninfo_unexecuted_blocks=1 00:10:05.244 00:10:05.244 ' 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.244 --rc genhtml_branch_coverage=1 00:10:05.244 --rc genhtml_function_coverage=1 00:10:05.244 --rc genhtml_legend=1 00:10:05.244 --rc geninfo_all_blocks=1 00:10:05.244 --rc geninfo_unexecuted_blocks=1 00:10:05.244 00:10:05.244 ' 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.244 --rc genhtml_branch_coverage=1 00:10:05.244 --rc genhtml_function_coverage=1 00:10:05.244 --rc genhtml_legend=1 00:10:05.244 --rc geninfo_all_blocks=1 00:10:05.244 --rc geninfo_unexecuted_blocks=1 00:10:05.244 
00:10:05.244 ' 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.244 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.245 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.218 13:41:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:07.218 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:07.218 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.218 13:41:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:07.218 Found net devices under 0000:09:00.0: cvl_0_0 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.218 13:41:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:07.218 Found net devices under 0000:09:00.1: cvl_0_1 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.218 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.478 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.478 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.478 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.478 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.478 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.478 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.478 13:41:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.478 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:10:07.479 00:10:07.479 --- 10.0.0.2 ping statistics --- 00:10:07.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.479 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:07.479 00:10:07.479 --- 10.0.0.1 ping statistics --- 00:10:07.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.479 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2131802 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2131802 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2131802 ']' 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.479 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.479 [2024-12-05 13:41:38.926037] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:10:07.479 [2024-12-05 13:41:38.926116] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.479 [2024-12-05 13:41:38.996700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.737 [2024-12-05 13:41:39.055200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.737 [2024-12-05 13:41:39.055255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.737 [2024-12-05 13:41:39.055283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.737 [2024-12-05 13:41:39.055294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.737 [2024-12-05 13:41:39.055305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:07.737 [2024-12-05 13:41:39.057022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.737 [2024-12-05 13:41:39.057086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.737 [2024-12-05 13:41:39.057090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:07.737 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.995 [2024-12-05 13:41:39.459586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.995 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:08.252 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.510 [2024-12-05 13:41:40.022370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.769 13:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:09.026 13:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:09.284 Malloc0 00:10:09.284 13:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:09.542 Delay0 00:10:09.542 13:41:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.799 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:10.058 NULL1 00:10:10.058 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:10.316 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2132105 00:10:10.316 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:10.316 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:10.316 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.574 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.833 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:10.833 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:11.091 true 00:10:11.091 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:11.091 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.349 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.608 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:11.608 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:11.865 true 00:10:11.865 13:41:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:11.865 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.123 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.381 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:12.381 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:12.638 true 00:10:12.638 13:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:12.638 13:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.574 Read completed with error (sct=0, sc=11) 00:10:13.574 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.831 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:13.831 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:14.397 true 00:10:14.397 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:14.397 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.397 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.656 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:14.656 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:14.915 true 00:10:14.915 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:14.915 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.851 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.108 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:16.109 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:16.366 true 00:10:16.366 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:16.366 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.624 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.881 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:16.882 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:17.139 true 00:10:17.139 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:17.139 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.074 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.333 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1008 00:10:18.333 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:18.591 true 00:10:18.591 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:18.591 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.849 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.108 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:19.108 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:19.366 true 00:10:19.366 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:19.366 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.298 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.554 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:20.554 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:20.812 true 00:10:20.812 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:20.812 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.070 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.328 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:21.328 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:21.585 true 00:10:21.585 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:21.585 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.841 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.097 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:22.097 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:22.354 true 00:10:22.354 13:41:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:22.354 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.315 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.573 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:23.573 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:23.830 true 00:10:23.830 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:23.830 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.395 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.395 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:24.395 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 
00:10:24.654 true 00:10:24.654 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:24.654 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.593 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.851 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:25.851 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:26.108 true 00:10:26.108 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:26.108 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.366 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.624 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:26.624 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:26.882 true 00:10:26.882 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2132105 00:10:26.882 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.139 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.397 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:27.397 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:27.655 true 00:10:27.655 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:27.655 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.594 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.853 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:28.853 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:29.111 true 00:10:29.111 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:29.111 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.368 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.933 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:29.933 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:29.933 true 00:10:29.933 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:29.933 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.497 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.497 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:30.497 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:30.757 true 00:10:31.015 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:31.015 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.953 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.953 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.953 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:31.953 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:32.211 true 00:10:32.211 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:32.212 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.469 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.726 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:32.726 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:32.986 true 00:10:33.245 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:33.245 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.182 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.182 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:34.182 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:34.440 true 00:10:34.440 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:34.440 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.697 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.955 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:34.955 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:35.213 true 00:10:35.213 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2132105 00:10:35.213 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.148 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.405 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:36.405 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:36.663 true 00:10:36.663 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:36.663 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.921 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.179 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:37.179 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:37.743 true 00:10:37.743 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:37.743 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.743 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.307 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:38.307 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:38.307 true 00:10:38.307 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:38.307 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.242 13:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.500 13:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:39.500 13:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:39.757 true 00:10:39.757 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105 00:10:39.757 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.015 
13:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:40.272 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:10:40.272 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:10:40.541 true
00:10:40.541 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105
00:10:40.541 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:40.799 Initializing NVMe Controllers
00:10:40.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:40.799 Controller IO queue size 128, less than required.
00:10:40.799 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:40.799 Controller IO queue size 128, less than required.
00:10:40.799 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:40.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:40.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:40.799 Initialization complete. Launching workers.
00:10:40.799 ========================================================
00:10:40.799 Latency(us)
00:10:40.799 Device Information                                                        : IOPS     MiB/s    Average   min       max
00:10:40.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  564.53   0.28  93797.88  2869.55  1086282.97
00:10:40.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8632.46   4.22  14785.53  3281.15   450407.26
00:10:40.799 ========================================================
00:10:40.799 Total                                                                    : 9196.99   4.49  19635.43  2869.55  1086282.97
00:10:40.799
00:10:40.799 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:41.057 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:10:41.057 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:10:41.314 true
00:10:41.314 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2132105
00:10:41.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2132105) - No such process
00:10:41.314 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2132105
00:10:41.315 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:41.572 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:41.830
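The Total row of the latency summary above can be cross-checked: total IOPS is the sum of the two namespace rows, and (assuming the report derives it as the usual IOPS-weighted mean, which the log does not state) the total average follows from the per-row averages. A minimal sketch using the figures from the table:

```shell
# Cross-check the latency summary's Total row. The row figures (IOPS, Average)
# are copied from the log; treating the Total average as the IOPS-weighted
# mean of the two rows is an assumption, not something the log asserts.
total=$(awk 'BEGIN { printf "%.2f", 564.53 + 8632.46 }')
avg=$(awk 'BEGIN { printf "%.1f", (564.53*93797.88 + 8632.46*14785.53) / (564.53 + 8632.46) }')
echo "total IOPS:   $total"   # 9196.99, matching the Total row
echo "weighted avg: $avg"     # close to the 19635.43 us reported
```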
13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:41.830 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:41.830 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:41.830 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:41.830 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:42.091 null0 00:10:42.091 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:42.351 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:42.351 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:42.351 null1 00:10:42.610 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:42.610 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:42.610 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:42.610 null2 00:10:42.871 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:42.871 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:42.871 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:43.128 null3 00:10:43.128 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.128 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.128 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:43.386 null4 00:10:43.386 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.386 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.386 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:43.644 null5 00:10:43.644 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.644 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.644 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:43.901 null6 00:10:43.901 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.901 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.901 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:44.159 null7 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.159 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2136915 2136916 2136918 2136920 2136922 2136924 2136926 2136928 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.160 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.417 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.674 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:44.931 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:45.496 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.496 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.496 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:45.496 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.497 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:45.497 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:45.755 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:45.755 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:45.755 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.755 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:45.755 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.755 13:42:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:45.755 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.012 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.269 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.527 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.786 13:42:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.786 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.786 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.786 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.786 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.786 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.786 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.786 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.080 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.362 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.362 13:42:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.362 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.362 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.362 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.362 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.362 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.362 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.620 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.620 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.620 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.877 13:42:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.877 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.134 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.135 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.135 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.135 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.135 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.135 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.135 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.135 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 
13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.392 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.649 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.907 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.165 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.165 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.165 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.165 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.165 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.165 13:42:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.165 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.165 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.423 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.685 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.943 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.201 rmmod nvme_tcp 00:10:50.201 rmmod nvme_fabrics 00:10:50.201 rmmod nvme_keyring 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2131802 ']' 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2131802 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2131802 ']' 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2131802 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131802 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131802' 00:10:50.201 killing process with pid 2131802 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2131802 00:10:50.201 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2131802 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.461 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.000 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.000 00:10:53.000 real 0m47.511s 00:10:53.000 user 3m41.946s 00:10:53.000 sys 0m16.138s 00:10:53.000 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.000 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.000 ************************************ 00:10:53.000 END TEST nvmf_ns_hotplug_stress 00:10:53.000 ************************************ 00:10:53.000 13:42:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:53.000 13:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.000 13:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.000 13:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:53.000 ************************************ 00:10:53.000 START TEST nvmf_delete_subsystem 00:10:53.000 ************************************ 00:10:53.000 
13:42:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:53.000 * Looking for test storage... 00:10:53.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.000 13:42:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.000 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.001 13:42:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.001 --rc genhtml_branch_coverage=1 00:10:53.001 --rc genhtml_function_coverage=1 00:10:53.001 --rc genhtml_legend=1 00:10:53.001 --rc geninfo_all_blocks=1 00:10:53.001 --rc geninfo_unexecuted_blocks=1 00:10:53.001 00:10:53.001 ' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.001 --rc genhtml_branch_coverage=1 00:10:53.001 --rc genhtml_function_coverage=1 00:10:53.001 --rc genhtml_legend=1 00:10:53.001 --rc geninfo_all_blocks=1 00:10:53.001 --rc geninfo_unexecuted_blocks=1 00:10:53.001 00:10:53.001 ' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.001 --rc genhtml_branch_coverage=1 00:10:53.001 --rc genhtml_function_coverage=1 00:10:53.001 --rc genhtml_legend=1 00:10:53.001 --rc geninfo_all_blocks=1 00:10:53.001 --rc geninfo_unexecuted_blocks=1 00:10:53.001 00:10:53.001 ' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.001 --rc genhtml_branch_coverage=1 00:10:53.001 --rc genhtml_function_coverage=1 00:10:53.001 --rc genhtml_legend=1 00:10:53.001 --rc geninfo_all_blocks=1 00:10:53.001 --rc geninfo_unexecuted_blocks=1 00:10:53.001 00:10:53.001 ' 
00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.001 13:42:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.001 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:54.903 13:42:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.903 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:54.904 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:54.904 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:54.904 Found net devices under 0000:09:00.0: cvl_0_0 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:10:54.904 Found net devices under 0000:09:00.1: cvl_0_1 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:54.904 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:10:55.163 00:10:55.163 --- 10.0.0.2 ping statistics --- 00:10:55.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.163 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:55.163 00:10:55.163 --- 10.0.0.1 ping statistics --- 00:10:55.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.163 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.163 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:55.164 13:42:26 
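The `nvmf_tcp_init` steps traced above build a single-NIC loopback topology: one ice port (`cvl_0_0`) is moved into a network namespace to act as the target, the other (`cvl_0_1`) stays in the root namespace as the initiator, each side gets a /24 address, an iptables rule opens TCP port 4420, and a ping in each direction verifies the link. A minimal sketch of that sequence, using the interface names, namespace name, and IPs from this run (requires root and real NIC ports):

```shell
# Loopback topology as set up by nvmf_tcp_init in this log.
# cvl_0_0 = target side (moved into a netns), cvl_0_1 = initiator side.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port now lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then verify
# reachability in both directions, as the log does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting only the target port in a namespace lets a single host exercise real NIC-to-NIC TCP traffic without a second machine.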
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2139710 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2139710 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2139710 ']' 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.164 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.164 [2024-12-05 13:42:26.539910] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:10:55.164 [2024-12-05 13:42:26.540003] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.164 [2024-12-05 13:42:26.614335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:55.164 [2024-12-05 13:42:26.670597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.164 [2024-12-05 13:42:26.670652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.164 [2024-12-05 13:42:26.670681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.164 [2024-12-05 13:42:26.670693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.164 [2024-12-05 13:42:26.670703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
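Because `nvmf_tgt` was started with `-i 0 -e 0xFFFF`, tracepoint data for this run is available in shared memory. The NOTICE lines above spell out both ways to get at it; condensed as commands (paths and flags taken directly from those notices):

```shell
# Capture a snapshot of trace events while the app is running
# (app instance id 0, tracepoint group mask 0xFFFF was enabled):
spdk_trace -s nvmf -i 0

# Or preserve the raw trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
```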
00:10:55.164 [2024-12-05 13:42:26.675442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.164 [2024-12-05 13:42:26.675449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 [2024-12-05 13:42:26.817650] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 [2024-12-05 13:42:26.833890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 NULL1 00:10:55.421 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 Delay0 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 13:42:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2139847 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:55.422 13:42:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:55.422 [2024-12-05 13:42:26.918637] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
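The `delete_subsystem.sh` steps traced above can be read as a plain RPC sequence: create the TCP transport, a subsystem, a listener, a null backing bdev, and a delay bdev layered on it, then attach the delay bdev as a namespace before launching `spdk_nvme_perf` against it. A condensed sketch using `scripts/rpc.py` from an SPDK checkout (the in-tree test uses the `rpc_cmd` wrapper instead; the `RPC` path here is an assumption):

```shell
RPC=scripts/rpc.py   # assumed path inside an SPDK checkout

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
# Delay bdev adds artificial latency (values in microseconds), so I/O is
# guaranteed to still be in flight when the subsystem is deleted:
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# While spdk_nvme_perf runs against the target, the test deletes the
# subsystem mid-I/O; the 'completed with error' records in the log are the
# expected aborts of those outstanding requests:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The one-second delay bdev is the key design choice: it is what makes the later flood of `Read/Write completed with error (sct=0, sc=8)` entries the intended outcome rather than a failure.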
00:10:57.957 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.957 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.957 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error 
(sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 [2024-12-05 13:42:29.039728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58f400d4d0 is same with the state(6) to be set 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 
00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 starting I/O failed: -6 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.957 Write completed with error (sct=0, sc=8) 00:10:57.957 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 starting I/O failed: -6 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 starting I/O failed: -6 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, 
sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 starting I/O failed: -6 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 [2024-12-05 13:42:29.040316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9884a0 is same with the state(6) to be set 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed 
with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 
00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 [2024-12-05 13:42:29.040781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58f4000c40 is same with the state(6) to be set 00:10:57.958 Write completed with error (sct=0, sc=8) 00:10:57.958 Read completed with error (sct=0, sc=8) 00:10:57.958 
Write completed with error (sct=0, sc=8)
00:10:57.958 Read completed with error (sct=0, sc=8)
[many similar Read/Write completions with (sct=0, sc=8) from 00:10:57.958 through 00:10:58.528 condensed here; they continue between each of the recv-state errors below]
00:10:58.527 [2024-12-05 13:42:30.014198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9899b0 is same with the state(6) to be set
00:10:58.527 [2024-12-05 13:42:30.042038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58f400d800 is same with the state(6) to be set
00:10:58.528 [2024-12-05 13:42:30.042315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9882c0 is same with the state(6) to be set
00:10:58.528 [2024-12-05 13:42:30.042544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988680 is same with the state(6) to be set
00:10:58.528 [2024-12-05 13:42:30.043038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58f400d020 is same with the state(6) to be set
00:10:58.528 Initializing NVMe Controllers
00:10:58.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:58.528 Controller IO queue size 128, less than required.
00:10:58.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:58.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:58.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:58.528 Initialization complete. Launching workers.
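The `(sct=0, sc=8)` status pair that dominates the completions above is expected for this test: `delete_subsystem.sh` tears the subsystem down while `spdk_nvme_perf` still has I/O outstanding. A minimal decoding sketch follows (status names per the generic status table of the NVMe base specification; the `decode_status` helper is illustrative, not part of the SPDK scripts, and only a few codes are listed):

```shell
#!/usr/bin/env bash
# Decode the (SCT, SC) pair from the completion lines above.
# SCT 0 is the generic status code type; only a handful of its codes
# are shown here -- see the NVMe base spec for the full tables.
decode_status() {
    local sct=$1 sc=$2
    if [ "$sct" -eq 0 ]; then
        case "$sc" in
            0) echo "Successful Completion" ;;
            4) echo "Data Transfer Error" ;;
            6) echo "Internal Error" ;;
            7) echo "Command Abort Requested" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) printf 'generic status 0x%02x\n' "$sc" ;;
        esac
    else
        printf 'sct=0x%x sc=0x%02x\n' "$sct" "$sc"
    fi
}

decode_status 0 8   # prints: Command Aborted due to SQ Deletion
```

Deleting the subsystem deletes its submission queues, so in-flight commands complete with exactly this abort status rather than success.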
00:10:58.528 ========================================================
00:10:58.528 Latency(us)
00:10:58.528 Device Information : IOPS MiB/s Average min max
00:10:58.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.27 0.08 898592.01 588.24 2002220.49
00:10:58.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.83 0.08 906542.54 659.59 1042936.82
00:10:58.528 ========================================================
00:10:58.528 Total : 337.10 0.16 902479.46 588.24 2002220.49
00:10:58.528
00:10:58.528 [2024-12-05 13:42:30.043872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9899b0 (9): Bad file descriptor
00:10:58.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:58.528 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.528 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:58.528 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2139847
00:10:58.528 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2139847
00:10:59.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2139847) - No such process
00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2139847
00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:10:59.097 13:42:30
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2139847 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2139847 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.097 
13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.097 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.097 [2024-12-05 13:42:30.568515] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2140262 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:59.098 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:59.356 [2024-12-05 13:42:30.642062] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:59.616 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:59.616 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262 00:10:59.616 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:00.186 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:00.186 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262 00:11:00.186 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:00.754 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:00.754 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262 00:11:00.754 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.322 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.322 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262 00:11:01.322 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.582 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.582 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262 00:11:01.582 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:02.149 13:42:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:02.149 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262
00:11:02.149 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:02.484 Initializing NVMe Controllers
00:11:02.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:02.484 Controller IO queue size 128, less than required.
00:11:02.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:02.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:02.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:02.484 Initialization complete. Launching workers.
00:11:02.484 ========================================================
00:11:02.484 Latency(us)
00:11:02.484 Device Information : IOPS MiB/s Average min max
00:11:02.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006271.74 1000172.66 1041644.62
00:11:02.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004259.97 1000142.09 1040967.46
00:11:02.484 ========================================================
00:11:02.484 Total : 256.00 0.12 1005265.85 1000142.09 1041644.62
00:11:02.484
00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2140262
00:11:02.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2140262) - No such process
00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 2140262 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.743 rmmod nvme_tcp 00:11:02.743 rmmod nvme_fabrics 00:11:02.743 rmmod nvme_keyring 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2139710 ']' 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2139710 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2139710 ']' 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2139710 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:11:02.743 13:42:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139710 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139710' 00:11:02.743 killing process with pid 2139710 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2139710 00:11:02.743 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2139710 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.002 13:42:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:05.539 00:11:05.539 real 0m12.463s 00:11:05.539 user 0m27.888s 00:11:05.539 sys 0m3.083s 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:05.539 ************************************ 00:11:05.539 END TEST nvmf_delete_subsystem 00:11:05.539 ************************************ 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.539 ************************************ 00:11:05.539 START TEST nvmf_host_management 00:11:05.539 ************************************ 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:05.539 * Looking for test storage... 
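The `host_management` test that starts here first probes the installed `lcov` version through `scripts/common.sh` (`lt 1.15 2`, which the trace below expands into `cmp_versions 1.15 '<' 2` with field-by-field comparison). A condensed, illustrative sketch of that dotted-version comparison; this simplified `lt` assumes purely numeric components, which the real script handles more generally:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the version comparison traced in this log:
# split each version on ".-:", compare numerically field by field,
# and treat missing fields as 0. Assumes numeric components only.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2"
```

With the arguments from the trace, the first field already decides it (1 < 2), which is why the test proceeds with the lcov 1.x coverage options.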
00:11:05.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.539 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:05.540 13:42:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.540 13:42:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.540 --rc genhtml_branch_coverage=1 00:11:05.540 --rc genhtml_function_coverage=1 00:11:05.540 --rc genhtml_legend=1 00:11:05.540 --rc geninfo_all_blocks=1 00:11:05.540 --rc geninfo_unexecuted_blocks=1 00:11:05.540 00:11:05.540 ' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.540 --rc genhtml_branch_coverage=1 00:11:05.540 --rc genhtml_function_coverage=1 00:11:05.540 --rc genhtml_legend=1 00:11:05.540 --rc geninfo_all_blocks=1 00:11:05.540 --rc geninfo_unexecuted_blocks=1 00:11:05.540 00:11:05.540 ' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.540 --rc genhtml_branch_coverage=1 00:11:05.540 --rc genhtml_function_coverage=1 00:11:05.540 --rc genhtml_legend=1 00:11:05.540 --rc geninfo_all_blocks=1 00:11:05.540 --rc geninfo_unexecuted_blocks=1 00:11:05.540 00:11:05.540 ' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.540 --rc genhtml_branch_coverage=1 00:11:05.540 --rc genhtml_function_coverage=1 00:11:05.540 --rc genhtml_legend=1 00:11:05.540 --rc geninfo_all_blocks=1 00:11:05.540 --rc geninfo_unexecuted_blocks=1 00:11:05.540 00:11:05.540 ' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.540 13:42:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.441 13:42:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:07.441 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.442 13:42:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:07.442 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:07.442 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.442 13:42:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:07.442 Found net devices under 0000:09:00.0: cvl_0_0 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:07.442 Found net devices under 0000:09:00.1: cvl_0_1 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.442 13:42:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.442 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.443 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.443 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.443 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.443 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.443 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:11:07.702 00:11:07.702 --- 10.0.0.2 ping statistics --- 00:11:07.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.702 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:11:07.702 00:11:07.702 --- 10.0.0.1 ping statistics --- 00:11:07.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.702 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.702 13:42:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2142616 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2142616 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2142616 ']' 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.702 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.702 [2024-12-05 13:42:39.068188] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:07.703 [2024-12-05 13:42:39.068291] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.703 [2024-12-05 13:42:39.140595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.703 [2024-12-05 13:42:39.200465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.703 [2024-12-05 13:42:39.200528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.703 [2024-12-05 13:42:39.200543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.703 [2024-12-05 13:42:39.200555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.703 [2024-12-05 13:42:39.200570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:07.703 [2024-12-05 13:42:39.202113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.703 [2024-12-05 13:42:39.202177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.703 [2024-12-05 13:42:39.202244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:07.703 [2024-12-05 13:42:39.202247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.960 [2024-12-05 13:42:39.351619] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:07.960 13:42:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.960 Malloc0 00:11:07.960 [2024-12-05 13:42:39.424595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2142781 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2142781 /var/tmp/bdevperf.sock 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2142781 ']' 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:07.960 { 00:11:07.960 "params": { 00:11:07.960 "name": "Nvme$subsystem", 00:11:07.960 "trtype": "$TEST_TRANSPORT", 00:11:07.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:07.960 "adrfam": "ipv4", 00:11:07.960 "trsvcid": "$NVMF_PORT", 00:11:07.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:07.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:07.960 "hdgst": ${hdgst:-false}, 
00:11:07.960 "ddgst": ${ddgst:-false} 00:11:07.960 }, 00:11:07.960 "method": "bdev_nvme_attach_controller" 00:11:07.960 } 00:11:07.960 EOF 00:11:07.960 )") 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:07.960 13:42:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:07.960 "params": { 00:11:07.960 "name": "Nvme0", 00:11:07.960 "trtype": "tcp", 00:11:07.960 "traddr": "10.0.0.2", 00:11:07.960 "adrfam": "ipv4", 00:11:07.960 "trsvcid": "4420", 00:11:07.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:07.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:07.960 "hdgst": false, 00:11:07.960 "ddgst": false 00:11:07.960 }, 00:11:07.960 "method": "bdev_nvme_attach_controller" 00:11:07.960 }' 00:11:08.219 [2024-12-05 13:42:39.507923] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:08.219 [2024-12-05 13:42:39.507998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142781 ] 00:11:08.219 [2024-12-05 13:42:39.576786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.219 [2024-12-05 13:42:39.634789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.478 Running I/O for 10 seconds... 
00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:08.735 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.996 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.996 [2024-12-05 13:42:40.374515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059870 is same with the state(6) to be set 00:11:08.997 [2024-12-05 13:42:40.375450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059870 is same with the state(6) to be set 00:11:08.997 [2024-12-05 13:42:40.375552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:08.997 [2024-12-05 13:42:40.375669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.375971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.375987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.997 [2024-12-05 13:42:40.376088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376161] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:08.997 [2024-12-05 13:42:40.376248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.997 [2024-12-05 13:42:40.376324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.997 [2024-12-05 13:42:40.376339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.998 [2024-12-05 13:42:40.376369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.998 [2024-12-05 13:42:40.376500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376971] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.376985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.376999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377140] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 
13:42:40.377498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.998 [2024-12-05 13:42:40.377511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.998 [2024-12-05 13:42:40.377526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfd740 is same with the state(6) to be set 00:11:08.999 [2024-12-05 13:42:40.377657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.999 [2024-12-05 13:42:40.377680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.999 [2024-12-05 13:42:40.377695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.999 [2024-12-05 13:42:40.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.999 [2024-12-05 13:42:40.377729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.999 [2024-12-05 13:42:40.377742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.999 [2024-12-05 13:42:40.377756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.999 [2024-12-05 13:42:40.377769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.999 [2024-12-05 13:42:40.377781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbe4a50 is same with the state(6) to be set 00:11:08.999 [2024-12-05 13:42:40.378929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:08.999 task offset: 73728 on job bdev=Nvme0n1 fails 00:11:08.999 00:11:08.999 Latency(us) 00:11:08.999 [2024-12-05T12:42:40.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.999 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:08.999 Job: Nvme0n1 ended in about 0.40 seconds with error 00:11:08.999 Verification LBA range: start 0x0 length 0x400 00:11:08.999 Nvme0n1 : 0.40 1456.52 91.03 161.84 0.00 38413.62 6699.24 36505.98 00:11:08.999 [2024-12-05T12:42:40.525Z] =================================================================================================================== 00:11:08.999 [2024-12-05T12:42:40.525Z] Total : 1456.52 91.03 161.84 0.00 38413.62 6699.24 36505.98 00:11:08.999 [2024-12-05 13:42:40.381026] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:08.999 [2024-12-05 13:42:40.381054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe4a50 (9): Bad file descriptor 00:11:08.999 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.999 13:42:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:08.999 [2024-12-05 13:42:40.389905] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2142781 00:11:09.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2142781) - No such process 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.932 { 00:11:09.932 "params": { 00:11:09.932 "name": "Nvme$subsystem", 00:11:09.932 "trtype": "$TEST_TRANSPORT", 00:11:09.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.932 "adrfam": "ipv4", 00:11:09.932 "trsvcid": "$NVMF_PORT", 00:11:09.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.932 "hdgst": ${hdgst:-false}, 00:11:09.932 "ddgst": ${ddgst:-false} 00:11:09.932 }, 00:11:09.932 "method": "bdev_nvme_attach_controller" 00:11:09.932 } 00:11:09.932 EOF 00:11:09.932 )") 00:11:09.932 
13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:09.932 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.932 "params": { 00:11:09.932 "name": "Nvme0", 00:11:09.932 "trtype": "tcp", 00:11:09.932 "traddr": "10.0.0.2", 00:11:09.932 "adrfam": "ipv4", 00:11:09.932 "trsvcid": "4420", 00:11:09.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:09.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:09.932 "hdgst": false, 00:11:09.932 "ddgst": false 00:11:09.932 }, 00:11:09.932 "method": "bdev_nvme_attach_controller" 00:11:09.932 }' 00:11:09.932 [2024-12-05 13:42:41.432969] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:09.932 [2024-12-05 13:42:41.433043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142945 ] 00:11:10.189 [2024-12-05 13:42:41.505024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.189 [2024-12-05 13:42:41.562455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.446 Running I/O for 1 seconds... 
00:11:11.816 1664.00 IOPS, 104.00 MiB/s 00:11:11.816 Latency(us) 00:11:11.816 [2024-12-05T12:42:43.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.816 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:11.816 Verification LBA range: start 0x0 length 0x400 00:11:11.816 Nvme0n1 : 1.01 1703.81 106.49 0.00 0.00 36944.13 6189.51 33204.91 00:11:11.816 [2024-12-05T12:42:43.342Z] =================================================================================================================== 00:11:11.816 [2024-12-05T12:42:43.342Z] Total : 1703.81 106.49 0.00 0.00 36944.13 6189.51 33204.91 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.816 13:42:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.816 rmmod nvme_tcp 00:11:11.816 rmmod nvme_fabrics 00:11:11.816 rmmod nvme_keyring 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2142616 ']' 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2142616 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2142616 ']' 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2142616 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142616 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142616' 00:11:11.816 killing process with pid 2142616 00:11:11.816 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2142616 00:11:11.816 13:42:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2142616 00:11:12.074 [2024-12-05 13:42:43.480666] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.074 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:14.615 00:11:14.615 real 0m9.050s 00:11:14.615 user 0m20.330s 
00:11:14.615 sys 0m2.890s 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 ************************************ 00:11:14.615 END TEST nvmf_host_management 00:11:14.615 ************************************ 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 ************************************ 00:11:14.615 START TEST nvmf_lvol 00:11:14.615 ************************************ 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:14.615 * Looking for test storage... 
00:11:14.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.615 13:42:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.615 --rc genhtml_branch_coverage=1 00:11:14.615 --rc genhtml_function_coverage=1 00:11:14.615 --rc genhtml_legend=1 00:11:14.615 --rc geninfo_all_blocks=1 00:11:14.615 --rc geninfo_unexecuted_blocks=1 
00:11:14.615 00:11:14.615 ' 00:11:14.615 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.616 --rc genhtml_branch_coverage=1 00:11:14.616 --rc genhtml_function_coverage=1 00:11:14.616 --rc genhtml_legend=1 00:11:14.616 --rc geninfo_all_blocks=1 00:11:14.616 --rc geninfo_unexecuted_blocks=1 00:11:14.616 00:11:14.616 ' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.616 --rc genhtml_branch_coverage=1 00:11:14.616 --rc genhtml_function_coverage=1 00:11:14.616 --rc genhtml_legend=1 00:11:14.616 --rc geninfo_all_blocks=1 00:11:14.616 --rc geninfo_unexecuted_blocks=1 00:11:14.616 00:11:14.616 ' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.616 --rc genhtml_branch_coverage=1 00:11:14.616 --rc genhtml_function_coverage=1 00:11:14.616 --rc genhtml_legend=1 00:11:14.616 --rc geninfo_all_blocks=1 00:11:14.616 --rc geninfo_unexecuted_blocks=1 00:11:14.616 00:11:14.616 ' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.616 13:42:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.616 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.631 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:16.632 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:16.632 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.632 
13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:16.632 Found net devices under 0000:09:00.0: cvl_0_0 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.632 13:42:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:16.632 Found net devices under 0000:09:00.1: cvl_0_1 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.632 13:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:16.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:11:16.632 00:11:16.632 --- 10.0.0.2 ping statistics --- 00:11:16.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.632 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:11:16.632 00:11:16.632 --- 10.0.0.1 ping statistics --- 00:11:16.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.632 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2145155 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2145155 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2145155 ']' 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.632 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:16.633 [2024-12-05 13:42:48.125797] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:16.633 [2024-12-05 13:42:48.125892] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.892 [2024-12-05 13:42:48.195834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.892 [2024-12-05 13:42:48.255781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.892 [2024-12-05 13:42:48.255826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.892 [2024-12-05 13:42:48.255856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.892 [2024-12-05 13:42:48.255880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.892 [2024-12-05 13:42:48.255890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
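The namespace wiring traced above (nvmf_tcp_init in nvmf/common.sh) moves one E810 port into a target-side namespace, addresses both ends, opens TCP/4420, and ping-tests the link before the target app is launched under `ip netns exec`. A dry-run sketch of that sequence, with interface names, addresses, and the port copied from the log — `DRY_RUN=1` (the default here) only prints the privileged commands instead of executing them:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced in the log. Interface
# names, IPs, and the 4420 port are taken from the log output; with
# DRY_RUN=1 the privileged commands are printed, never executed.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
NS=cvl_0_0_ns_spdk     # target-side network namespace
TGT_IF=cvl_0_0         # port moved into the namespace (10.0.0.2)
INI_IF=cvl_0_1         # initiator port left in the root namespace (10.0.0.1)

run() {
    if [[ "$DRY_RUN" == 1 ]]; then
        echo "+ $*"
    else
        "$@"           # needs root when actually executed
    fi
}

setup_netns() {
    run ip netns add "$NS"
    run ip link set "$TGT_IF" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$INI_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    run ip link set "$INI_IF" up
    run ip netns exec "$NS" ip link set "$TGT_IF" up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2    # initiator -> target sanity check
}

setup_netns
```

With the link verified, the log then starts `nvmf_tgt` inside the same namespace via `ip netns exec cvl_0_0_ns_spdk`, which is why the target listens on 10.0.0.2 while the initiator-side tools run in the root namespace.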
00:11:16.892 [2024-12-05 13:42:48.257441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.892 [2024-12-05 13:42:48.257465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.892 [2024-12-05 13:42:48.257469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.892 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.892 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:16.892 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.892 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.892 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:16.892 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.892 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:17.150 [2024-12-05 13:42:48.633882] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.150 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.717 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:17.717 13:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.717 13:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:17.717 13:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:18.282 13:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:18.283 13:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c2b01505-e670-464a-9607-38adcc954869 00:11:18.283 13:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c2b01505-e670-464a-9607-38adcc954869 lvol 20 00:11:18.850 13:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=08b8d7cd-ee5c-4dea-9f74-dc747ae0bda7 00:11:18.850 13:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:18.850 13:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08b8d7cd-ee5c-4dea-9f74-dc747ae0bda7 00:11:19.108 13:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:19.365 [2024-12-05 13:42:50.869369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.623 13:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.881 13:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2145586 00:11:19.881 13:42:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:19.881 13:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:20.814 13:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 08b8d7cd-ee5c-4dea-9f74-dc747ae0bda7 MY_SNAPSHOT 00:11:21.071 13:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=af4f8e36-9253-4c2c-aeda-64f2b1f021f6 00:11:21.071 13:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 08b8d7cd-ee5c-4dea-9f74-dc747ae0bda7 30 00:11:21.330 13:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone af4f8e36-9253-4c2c-aeda-64f2b1f021f6 MY_CLONE 00:11:21.895 13:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cc993d48-8c45-4381-b51e-92805e6bd12c 00:11:21.895 13:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cc993d48-8c45-4381-b51e-92805e6bd12c 00:11:22.461 13:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2145586 00:11:30.568 Initializing NVMe Controllers 00:11:30.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:30.568 Controller IO queue size 128, less than required. 00:11:30.568 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:30.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:30.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:30.568 Initialization complete. Launching workers. 00:11:30.568 ======================================================== 00:11:30.568 Latency(us) 00:11:30.568 Device Information : IOPS MiB/s Average min max 00:11:30.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10463.60 40.87 12235.25 1754.82 71689.13 00:11:30.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10449.80 40.82 12251.53 2116.81 62456.96 00:11:30.568 ======================================================== 00:11:30.568 Total : 20913.40 81.69 12243.38 1754.82 71689.13 00:11:30.568 00:11:30.568 13:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:30.568 13:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08b8d7cd-ee5c-4dea-9f74-dc747ae0bda7 00:11:30.826 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2b01505-e670-464a-9607-38adcc954869 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol 
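The "Total" row of the spdk_nvme_perf summary above is derivable from the two per-core rows: total IOPS is the sum, and the average latency is the IOPS-weighted mean of the per-core averages. A quick awk cross-check, with the figures copied verbatim from the report:

```shell
#!/usr/bin/env bash
# Cross-check the "Total" row of the perf summary: total IOPS is the
# per-core sum; average latency is the IOPS-weighted mean. Input
# figures are copied from the log.
set -euo pipefail

perf_total() {
    awk 'BEGIN {
        iops3 = 10463.60; lat3 = 12235.25   # NSID 1 from core 3
        iops4 = 10449.80; lat4 = 12251.53   # NSID 1 from core 4
        total = iops3 + iops4
        avg   = (iops3 * lat3 + iops4 * lat4) / total
        printf "total_iops=%.2f avg_latency_us=%.2f\n", total, avg
    }'
}

perf_total   # matches the report: 20913.40 IOPS, 12243.38 us average
```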
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.084 rmmod nvme_tcp 00:11:31.084 rmmod nvme_fabrics 00:11:31.084 rmmod nvme_keyring 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2145155 ']' 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2145155 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2145155 ']' 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2145155 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145155 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145155' 00:11:31.084 killing process with pid 2145155 00:11:31.084 13:43:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2145155 00:11:31.084 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2145155 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.345 13:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.889 00:11:33.889 real 0m19.209s 00:11:33.889 user 1m5.402s 00:11:33.889 sys 0m5.438s 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.889 ************************************ 00:11:33.889 END TEST 
nvmf_lvol 00:11:33.889 ************************************ 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.889 ************************************ 00:11:33.889 START TEST nvmf_lvs_grow 00:11:33.889 ************************************ 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:33.889 * Looking for test storage... 00:11:33.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.889 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.889 13:43:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.889 --rc genhtml_branch_coverage=1 00:11:33.889 --rc genhtml_function_coverage=1 00:11:33.889 --rc genhtml_legend=1 00:11:33.889 --rc geninfo_all_blocks=1 00:11:33.889 --rc geninfo_unexecuted_blocks=1 00:11:33.889 00:11:33.889 ' 
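The trace above walks scripts/common.sh's version comparison (`lt 1.15 2` for the lcov probe): each version string is split on `.` and `-` via IFS, then compared numerically field by field, with missing fields treated as 0. A standalone sketch of that logic — the function name `version_lt` is ours, not SPDK's:

```shell
#!/usr/bin/env bash
# Standalone sketch of the component-wise version comparison that
# scripts/common.sh traces above ("lt 1.15 2"): split on '.' or '-',
# compare numerically field by field, missing fields count as 0.
# The name version_lt is our own, not SPDK's.
set -euo pipefail

version_lt() {
    local IFS=.-
    local -a a=($1) b=($2)            # IFS splits "2.39.2" into (2 39 2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note this is a numeric, not lexical, comparison: 1.15 sorts above 1.2 because 15 > 2, which is exactly why the lcov version check in the log takes the `lt 1.15 2` branch rather than a string compare.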
00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.889 --rc genhtml_branch_coverage=1 00:11:33.889 --rc genhtml_function_coverage=1 00:11:33.889 --rc genhtml_legend=1 00:11:33.889 --rc geninfo_all_blocks=1 00:11:33.889 --rc geninfo_unexecuted_blocks=1 00:11:33.889 00:11:33.889 ' 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.889 --rc genhtml_branch_coverage=1 00:11:33.889 --rc genhtml_function_coverage=1 00:11:33.889 --rc genhtml_legend=1 00:11:33.889 --rc geninfo_all_blocks=1 00:11:33.889 --rc geninfo_unexecuted_blocks=1 00:11:33.889 00:11:33.889 ' 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.889 --rc genhtml_branch_coverage=1 00:11:33.889 --rc genhtml_function_coverage=1 00:11:33.889 --rc genhtml_legend=1 00:11:33.889 --rc geninfo_all_blocks=1 00:11:33.889 --rc geninfo_unexecuted_blocks=1 00:11:33.889 00:11:33.889 ' 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.889 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.889 13:43:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.890 
13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.890 13:43:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.890 
13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.890 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.795 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:35.796 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:35.796 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.796 
13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:35.796 Found net devices under 0000:09:00.0: cvl_0_0 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:35.796 Found net devices under 0000:09:00.1: cvl_0_1 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.796 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.056 13:43:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:11:36.056 00:11:36.056 --- 10.0.0.2 ping statistics --- 00:11:36.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.056 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:11:36.056 00:11:36.056 --- 10.0.0.1 ping statistics --- 00:11:36.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.056 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.056 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2148873 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2148873 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2148873 ']' 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.057 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:36.057 [2024-12-05 13:43:07.436960] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:36.057 [2024-12-05 13:43:07.437038] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.057 [2024-12-05 13:43:07.510006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.057 [2024-12-05 13:43:07.565122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.057 [2024-12-05 13:43:07.565172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.057 [2024-12-05 13:43:07.565185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.057 [2024-12-05 13:43:07.565197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.057 [2024-12-05 13:43:07.565207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:36.057 [2024-12-05 13:43:07.565813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.315 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.315 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:11:36.315 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.315 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.315 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:36.315 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.315 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:36.573 [2024-12-05 13:43:07.947700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:36.573 ************************************ 00:11:36.573 START TEST lvs_grow_clean 00:11:36.573 ************************************ 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:36.573 13:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:36.831 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:36.831 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:37.089 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:37.090 13:43:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:37.090 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:37.347 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:37.347 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:37.347 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb lvol 150 00:11:37.606 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8d64e2e2-9d55-49b5-b79f-b7a5ea28d098 00:11:37.606 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:37.606 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:37.894 [2024-12-05 13:43:09.359811] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:37.894 [2024-12-05 13:43:09.359913] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:37.894 true 00:11:37.894 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:37.894 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:38.151 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:38.151 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:38.408 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d64e2e2-9d55-49b5-b79f-b7a5ea28d098 00:11:38.666 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:39.230 [2024-12-05 13:43:10.447189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2149318 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:39.230 13:43:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2149318 /var/tmp/bdevperf.sock 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2149318 ']' 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:39.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.230 13:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:39.488 [2024-12-05 13:43:10.781631] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:39.488 [2024-12-05 13:43:10.781713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149318 ] 00:11:39.488 [2024-12-05 13:43:10.845719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.488 [2024-12-05 13:43:10.901951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.745 13:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.745 13:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:11:39.745 13:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:40.024 Nvme0n1 00:11:40.024 13:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:40.318 [ 00:11:40.318 { 00:11:40.318 "name": "Nvme0n1", 00:11:40.318 "aliases": [ 00:11:40.318 "8d64e2e2-9d55-49b5-b79f-b7a5ea28d098" 00:11:40.318 ], 00:11:40.318 "product_name": "NVMe disk", 00:11:40.318 "block_size": 4096, 00:11:40.318 "num_blocks": 38912, 00:11:40.318 "uuid": "8d64e2e2-9d55-49b5-b79f-b7a5ea28d098", 00:11:40.318 "numa_id": 0, 00:11:40.318 "assigned_rate_limits": { 00:11:40.318 "rw_ios_per_sec": 0, 00:11:40.318 "rw_mbytes_per_sec": 0, 00:11:40.318 "r_mbytes_per_sec": 0, 00:11:40.318 "w_mbytes_per_sec": 0 00:11:40.318 }, 00:11:40.318 "claimed": false, 00:11:40.318 "zoned": false, 00:11:40.318 "supported_io_types": { 00:11:40.318 "read": true, 
00:11:40.318 "write": true, 00:11:40.318 "unmap": true, 00:11:40.318 "flush": true, 00:11:40.318 "reset": true, 00:11:40.318 "nvme_admin": true, 00:11:40.318 "nvme_io": true, 00:11:40.318 "nvme_io_md": false, 00:11:40.318 "write_zeroes": true, 00:11:40.318 "zcopy": false, 00:11:40.318 "get_zone_info": false, 00:11:40.318 "zone_management": false, 00:11:40.318 "zone_append": false, 00:11:40.318 "compare": true, 00:11:40.318 "compare_and_write": true, 00:11:40.318 "abort": true, 00:11:40.318 "seek_hole": false, 00:11:40.318 "seek_data": false, 00:11:40.318 "copy": true, 00:11:40.318 "nvme_iov_md": false 00:11:40.318 }, 00:11:40.318 "memory_domains": [ 00:11:40.318 { 00:11:40.318 "dma_device_id": "system", 00:11:40.318 "dma_device_type": 1 00:11:40.318 } 00:11:40.318 ], 00:11:40.318 "driver_specific": { 00:11:40.318 "nvme": [ 00:11:40.318 { 00:11:40.318 "trid": { 00:11:40.318 "trtype": "TCP", 00:11:40.318 "adrfam": "IPv4", 00:11:40.318 "traddr": "10.0.0.2", 00:11:40.318 "trsvcid": "4420", 00:11:40.318 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:40.318 }, 00:11:40.318 "ctrlr_data": { 00:11:40.318 "cntlid": 1, 00:11:40.318 "vendor_id": "0x8086", 00:11:40.318 "model_number": "SPDK bdev Controller", 00:11:40.318 "serial_number": "SPDK0", 00:11:40.318 "firmware_revision": "25.01", 00:11:40.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:40.318 "oacs": { 00:11:40.318 "security": 0, 00:11:40.318 "format": 0, 00:11:40.318 "firmware": 0, 00:11:40.318 "ns_manage": 0 00:11:40.318 }, 00:11:40.318 "multi_ctrlr": true, 00:11:40.318 "ana_reporting": false 00:11:40.318 }, 00:11:40.318 "vs": { 00:11:40.318 "nvme_version": "1.3" 00:11:40.318 }, 00:11:40.318 "ns_data": { 00:11:40.318 "id": 1, 00:11:40.318 "can_share": true 00:11:40.318 } 00:11:40.318 } 00:11:40.318 ], 00:11:40.318 "mp_policy": "active_passive" 00:11:40.318 } 00:11:40.318 } 00:11:40.318 ] 00:11:40.318 13:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2149453 00:11:40.318 13:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:40.318 13:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:40.318 Running I/O for 10 seconds... 00:11:41.251 Latency(us) 00:11:41.251 [2024-12-05T12:43:12.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.251 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:11:41.251 [2024-12-05T12:43:12.777Z] =================================================================================================================== 00:11:41.251 [2024-12-05T12:43:12.777Z] Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:11:41.251 00:11:42.183 13:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:42.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.441 Nvme0n1 : 2.00 15353.50 59.97 0.00 0.00 0.00 0.00 0.00 00:11:42.441 [2024-12-05T12:43:13.967Z] =================================================================================================================== 00:11:42.441 [2024-12-05T12:43:13.967Z] Total : 15353.50 59.97 0.00 0.00 0.00 0.00 0.00 00:11:42.441 00:11:42.441 true 00:11:42.441 13:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:42.441 13:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:11:43.008 13:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:43.008 13:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:43.008 13:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2149453 00:11:43.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.268 Nvme0n1 : 3.00 15464.00 60.41 0.00 0.00 0.00 0.00 0.00 00:11:43.268 [2024-12-05T12:43:14.794Z] =================================================================================================================== 00:11:43.268 [2024-12-05T12:43:14.794Z] Total : 15464.00 60.41 0.00 0.00 0.00 0.00 0.00 00:11:43.268 00:11:44.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.647 Nvme0n1 : 4.00 15582.50 60.87 0.00 0.00 0.00 0.00 0.00 00:11:44.647 [2024-12-05T12:43:16.173Z] =================================================================================================================== 00:11:44.647 [2024-12-05T12:43:16.173Z] Total : 15582.50 60.87 0.00 0.00 0.00 0.00 0.00 00:11:44.647 00:11:45.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.588 Nvme0n1 : 5.00 15679.40 61.25 0.00 0.00 0.00 0.00 0.00 00:11:45.588 [2024-12-05T12:43:17.114Z] =================================================================================================================== 00:11:45.588 [2024-12-05T12:43:17.114Z] Total : 15679.40 61.25 0.00 0.00 0.00 0.00 0.00 00:11:45.588 00:11:46.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.523 Nvme0n1 : 6.00 15743.83 61.50 0.00 0.00 0.00 0.00 0.00 00:11:46.523 [2024-12-05T12:43:18.049Z] =================================================================================================================== 00:11:46.523 
[2024-12-05T12:43:18.049Z] Total : 15743.83 61.50 0.00 0.00 0.00 0.00 0.00 00:11:46.523 00:11:47.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.461 Nvme0n1 : 7.00 15794.57 61.70 0.00 0.00 0.00 0.00 0.00 00:11:47.461 [2024-12-05T12:43:18.988Z] =================================================================================================================== 00:11:47.462 [2024-12-05T12:43:18.988Z] Total : 15794.57 61.70 0.00 0.00 0.00 0.00 0.00 00:11:47.462 00:11:48.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.402 Nvme0n1 : 8.00 15836.38 61.86 0.00 0.00 0.00 0.00 0.00 00:11:48.402 [2024-12-05T12:43:19.928Z] =================================================================================================================== 00:11:48.402 [2024-12-05T12:43:19.928Z] Total : 15836.38 61.86 0.00 0.00 0.00 0.00 0.00 00:11:48.402 00:11:49.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.343 Nvme0n1 : 9.00 15854.78 61.93 0.00 0.00 0.00 0.00 0.00 00:11:49.343 [2024-12-05T12:43:20.869Z] =================================================================================================================== 00:11:49.343 [2024-12-05T12:43:20.869Z] Total : 15854.78 61.93 0.00 0.00 0.00 0.00 0.00 00:11:49.343 00:11:50.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.280 Nvme0n1 : 10.00 15882.70 62.04 0.00 0.00 0.00 0.00 0.00 00:11:50.280 [2024-12-05T12:43:21.806Z] =================================================================================================================== 00:11:50.280 [2024-12-05T12:43:21.806Z] Total : 15882.70 62.04 0.00 0.00 0.00 0.00 0.00 00:11:50.280 00:11:50.280 00:11:50.280 Latency(us) 00:11:50.280 [2024-12-05T12:43:21.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:50.280 Nvme0n1 : 10.00 15882.42 62.04 0.00 0.00 8053.81 4344.79 17476.27 00:11:50.280 [2024-12-05T12:43:21.806Z] =================================================================================================================== 00:11:50.280 [2024-12-05T12:43:21.806Z] Total : 15882.42 62.04 0.00 0.00 8053.81 4344.79 17476.27 00:11:50.280 { 00:11:50.280 "results": [ 00:11:50.280 { 00:11:50.280 "job": "Nvme0n1", 00:11:50.280 "core_mask": "0x2", 00:11:50.280 "workload": "randwrite", 00:11:50.280 "status": "finished", 00:11:50.280 "queue_depth": 128, 00:11:50.280 "io_size": 4096, 00:11:50.280 "runtime": 10.004209, 00:11:50.280 "iops": 15882.415091487992, 00:11:50.280 "mibps": 62.04068395112497, 00:11:50.280 "io_failed": 0, 00:11:50.280 "io_timeout": 0, 00:11:50.280 "avg_latency_us": 8053.807799159778, 00:11:50.280 "min_latency_us": 4344.794074074074, 00:11:50.280 "max_latency_us": 17476.266666666666 00:11:50.280 } 00:11:50.280 ], 00:11:50.280 "core_count": 1 00:11:50.280 } 00:11:50.280 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2149318 00:11:50.280 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2149318 ']' 00:11:50.280 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2149318 00:11:50.280 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:11:50.280 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.280 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2149318 00:11:50.537 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:50.537 13:43:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:50.537 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2149318' 00:11:50.537 killing process with pid 2149318 00:11:50.537 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2149318 00:11:50.537 Received shutdown signal, test time was about 10.000000 seconds 00:11:50.537 00:11:50.537 Latency(us) 00:11:50.537 [2024-12-05T12:43:22.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.537 [2024-12-05T12:43:22.063Z] =================================================================================================================== 00:11:50.537 [2024-12-05T12:43:22.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:50.537 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2149318 00:11:50.537 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:50.795 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:51.363 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:51.363 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:51.363 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:11:51.363 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:51.363 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:51.622 [2024-12-05 13:43:23.097188] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.622 
13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:51.622 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:51.880 request: 00:11:51.880 { 00:11:51.880 "uuid": "99e4e93f-35a3-4921-bea9-164eb32dc1fb", 00:11:51.880 "method": "bdev_lvol_get_lvstores", 00:11:51.880 "req_id": 1 00:11:51.880 } 00:11:51.880 Got JSON-RPC error response 00:11:51.880 response: 00:11:51.880 { 00:11:51.880 "code": -19, 00:11:51.880 "message": "No such device" 00:11:51.880 } 00:11:52.138 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:11:52.138 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.138 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.138 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.138 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:52.397 aio_bdev 00:11:52.398 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8d64e2e2-9d55-49b5-b79f-b7a5ea28d098 00:11:52.398 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8d64e2e2-9d55-49b5-b79f-b7a5ea28d098 00:11:52.398 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.398 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:11:52.398 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.398 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.398 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:52.656 13:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8d64e2e2-9d55-49b5-b79f-b7a5ea28d098 -t 2000 00:11:52.914 [ 00:11:52.914 { 00:11:52.914 "name": "8d64e2e2-9d55-49b5-b79f-b7a5ea28d098", 00:11:52.914 "aliases": [ 00:11:52.914 "lvs/lvol" 00:11:52.914 ], 00:11:52.914 "product_name": "Logical Volume", 00:11:52.914 "block_size": 4096, 00:11:52.914 "num_blocks": 38912, 00:11:52.914 "uuid": "8d64e2e2-9d55-49b5-b79f-b7a5ea28d098", 00:11:52.914 "assigned_rate_limits": { 00:11:52.914 "rw_ios_per_sec": 0, 00:11:52.914 "rw_mbytes_per_sec": 0, 00:11:52.914 "r_mbytes_per_sec": 0, 00:11:52.914 "w_mbytes_per_sec": 0 00:11:52.914 }, 00:11:52.914 "claimed": false, 00:11:52.914 "zoned": false, 00:11:52.914 "supported_io_types": { 00:11:52.914 "read": true, 00:11:52.914 "write": true, 00:11:52.914 "unmap": true, 00:11:52.914 "flush": false, 00:11:52.914 "reset": true, 00:11:52.914 
"nvme_admin": false, 00:11:52.914 "nvme_io": false, 00:11:52.914 "nvme_io_md": false, 00:11:52.914 "write_zeroes": true, 00:11:52.914 "zcopy": false, 00:11:52.914 "get_zone_info": false, 00:11:52.914 "zone_management": false, 00:11:52.914 "zone_append": false, 00:11:52.914 "compare": false, 00:11:52.914 "compare_and_write": false, 00:11:52.914 "abort": false, 00:11:52.914 "seek_hole": true, 00:11:52.914 "seek_data": true, 00:11:52.914 "copy": false, 00:11:52.914 "nvme_iov_md": false 00:11:52.914 }, 00:11:52.914 "driver_specific": { 00:11:52.914 "lvol": { 00:11:52.914 "lvol_store_uuid": "99e4e93f-35a3-4921-bea9-164eb32dc1fb", 00:11:52.914 "base_bdev": "aio_bdev", 00:11:52.914 "thin_provision": false, 00:11:52.914 "num_allocated_clusters": 38, 00:11:52.914 "snapshot": false, 00:11:52.914 "clone": false, 00:11:52.914 "esnap_clone": false 00:11:52.914 } 00:11:52.914 } 00:11:52.914 } 00:11:52.914 ] 00:11:52.914 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:11:52.914 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:52.914 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:53.173 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:53.173 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:53.173 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:53.433 13:43:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:53.433 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d64e2e2-9d55-49b5-b79f-b7a5ea28d098 00:11:53.693 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99e4e93f-35a3-4921-bea9-164eb32dc1fb 00:11:53.951 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.208 00:11:54.208 real 0m17.617s 00:11:54.208 user 0m17.209s 00:11:54.208 sys 0m1.818s 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:54.208 ************************************ 00:11:54.208 END TEST lvs_grow_clean 00:11:54.208 ************************************ 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:54.208 ************************************ 
00:11:54.208 START TEST lvs_grow_dirty 00:11:54.208 ************************************ 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.208 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:54.467 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:54.467 13:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:54.725 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:11:54.725 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:11:54.725 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:54.983 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:54.983 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:54.983 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 lvol 150 00:11:55.548 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=68760dd5-ddb3-4a8e-aa3d-c804fc907025 00:11:55.548 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:55.548 13:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:55.548 [2024-12-05 13:43:27.026780] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:11:55.548 [2024-12-05 13:43:27.026853] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:55.548 true 00:11:55.548 13:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:11:55.548 13:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:55.806 13:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:55.806 13:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:56.064 13:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 68760dd5-ddb3-4a8e-aa3d-c804fc907025 00:11:56.631 13:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:56.631 [2024-12-05 13:43:28.110123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.631 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2151503 00:11:56.889 13:43:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2151503 /var/tmp/bdevperf.sock 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2151503 ']' 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:56.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.889 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:57.148 [2024-12-05 13:43:28.437012] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:57.148 [2024-12-05 13:43:28.437085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151503 ] 00:11:57.148 [2024-12-05 13:43:28.502154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.148 [2024-12-05 13:43:28.562865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.406 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.406 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:57.406 13:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:57.662 Nvme0n1 00:11:57.662 13:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:57.919 [ 00:11:57.919 { 00:11:57.919 "name": "Nvme0n1", 00:11:57.919 "aliases": [ 00:11:57.919 "68760dd5-ddb3-4a8e-aa3d-c804fc907025" 00:11:57.919 ], 00:11:57.919 "product_name": "NVMe disk", 00:11:57.919 "block_size": 4096, 00:11:57.919 "num_blocks": 38912, 00:11:57.919 "uuid": "68760dd5-ddb3-4a8e-aa3d-c804fc907025", 00:11:57.919 "numa_id": 0, 00:11:57.919 "assigned_rate_limits": { 00:11:57.919 "rw_ios_per_sec": 0, 00:11:57.919 "rw_mbytes_per_sec": 0, 00:11:57.919 "r_mbytes_per_sec": 0, 00:11:57.919 "w_mbytes_per_sec": 0 00:11:57.919 }, 00:11:57.919 "claimed": false, 00:11:57.919 "zoned": false, 00:11:57.919 "supported_io_types": { 00:11:57.919 "read": true, 
00:11:57.919 "write": true, 00:11:57.919 "unmap": true, 00:11:57.919 "flush": true, 00:11:57.919 "reset": true, 00:11:57.919 "nvme_admin": true, 00:11:57.919 "nvme_io": true, 00:11:57.919 "nvme_io_md": false, 00:11:57.919 "write_zeroes": true, 00:11:57.919 "zcopy": false, 00:11:57.919 "get_zone_info": false, 00:11:57.919 "zone_management": false, 00:11:57.919 "zone_append": false, 00:11:57.919 "compare": true, 00:11:57.919 "compare_and_write": true, 00:11:57.919 "abort": true, 00:11:57.919 "seek_hole": false, 00:11:57.919 "seek_data": false, 00:11:57.919 "copy": true, 00:11:57.919 "nvme_iov_md": false 00:11:57.919 }, 00:11:57.919 "memory_domains": [ 00:11:57.919 { 00:11:57.919 "dma_device_id": "system", 00:11:57.919 "dma_device_type": 1 00:11:57.919 } 00:11:57.919 ], 00:11:57.919 "driver_specific": { 00:11:57.919 "nvme": [ 00:11:57.919 { 00:11:57.919 "trid": { 00:11:57.919 "trtype": "TCP", 00:11:57.919 "adrfam": "IPv4", 00:11:57.919 "traddr": "10.0.0.2", 00:11:57.919 "trsvcid": "4420", 00:11:57.919 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:57.919 }, 00:11:57.919 "ctrlr_data": { 00:11:57.919 "cntlid": 1, 00:11:57.919 "vendor_id": "0x8086", 00:11:57.919 "model_number": "SPDK bdev Controller", 00:11:57.919 "serial_number": "SPDK0", 00:11:57.919 "firmware_revision": "25.01", 00:11:57.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:57.919 "oacs": { 00:11:57.919 "security": 0, 00:11:57.919 "format": 0, 00:11:57.919 "firmware": 0, 00:11:57.919 "ns_manage": 0 00:11:57.919 }, 00:11:57.919 "multi_ctrlr": true, 00:11:57.919 "ana_reporting": false 00:11:57.919 }, 00:11:57.919 "vs": { 00:11:57.919 "nvme_version": "1.3" 00:11:57.919 }, 00:11:57.919 "ns_data": { 00:11:57.919 "id": 1, 00:11:57.919 "can_share": true 00:11:57.919 } 00:11:57.919 } 00:11:57.919 ], 00:11:57.919 "mp_policy": "active_passive" 00:11:57.919 } 00:11:57.919 } 00:11:57.919 ] 00:11:57.919 13:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2151638 00:11:57.919 13:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:57.919 13:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:58.177 Running I/O for 10 seconds... 00:11:59.108 Latency(us) 00:11:59.108 [2024-12-05T12:43:30.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.109 Nvme0n1 : 1.00 14895.00 58.18 0.00 0.00 0.00 0.00 0.00 00:11:59.109 [2024-12-05T12:43:30.635Z] =================================================================================================================== 00:11:59.109 [2024-12-05T12:43:30.635Z] Total : 14895.00 58.18 0.00 0.00 0.00 0.00 0.00 00:11:59.109 00:12:00.046 13:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:00.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.046 Nvme0n1 : 2.00 15227.00 59.48 0.00 0.00 0.00 0.00 0.00 00:12:00.046 [2024-12-05T12:43:31.572Z] =================================================================================================================== 00:12:00.046 [2024-12-05T12:43:31.572Z] Total : 15227.00 59.48 0.00 0.00 0.00 0.00 0.00 00:12:00.046 00:12:00.304 true 00:12:00.304 13:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:00.304 13:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:00.564 13:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:00.564 13:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:00.564 13:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2151638 00:12:01.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.223 Nvme0n1 : 3.00 15358.33 59.99 0.00 0.00 0.00 0.00 0.00 00:12:01.223 [2024-12-05T12:43:32.750Z] =================================================================================================================== 00:12:01.224 [2024-12-05T12:43:32.750Z] Total : 15358.33 59.99 0.00 0.00 0.00 0.00 0.00 00:12:01.224 00:12:02.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.161 Nvme0n1 : 4.00 15472.00 60.44 0.00 0.00 0.00 0.00 0.00 00:12:02.161 [2024-12-05T12:43:33.687Z] =================================================================================================================== 00:12:02.161 [2024-12-05T12:43:33.687Z] Total : 15472.00 60.44 0.00 0.00 0.00 0.00 0.00 00:12:02.161 00:12:03.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.100 Nvme0n1 : 5.00 15565.60 60.80 0.00 0.00 0.00 0.00 0.00 00:12:03.100 [2024-12-05T12:43:34.626Z] =================================================================================================================== 00:12:03.100 [2024-12-05T12:43:34.626Z] Total : 15565.60 60.80 0.00 0.00 0.00 0.00 0.00 00:12:03.100 00:12:04.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.038 Nvme0n1 : 6.00 15617.17 61.00 0.00 0.00 0.00 0.00 0.00 00:12:04.038 [2024-12-05T12:43:35.565Z] =================================================================================================================== 00:12:04.039 
[2024-12-05T12:43:35.565Z] Total : 15617.17 61.00 0.00 0.00 0.00 0.00 0.00 00:12:04.039 00:12:04.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.977 Nvme0n1 : 7.00 15672.14 61.22 0.00 0.00 0.00 0.00 0.00 00:12:04.977 [2024-12-05T12:43:36.503Z] =================================================================================================================== 00:12:04.977 [2024-12-05T12:43:36.503Z] Total : 15672.14 61.22 0.00 0.00 0.00 0.00 0.00 00:12:04.977 00:12:06.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.357 Nvme0n1 : 8.00 15729.25 61.44 0.00 0.00 0.00 0.00 0.00 00:12:06.357 [2024-12-05T12:43:37.883Z] =================================================================================================================== 00:12:06.357 [2024-12-05T12:43:37.883Z] Total : 15729.25 61.44 0.00 0.00 0.00 0.00 0.00 00:12:06.357 00:12:07.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.297 Nvme0n1 : 9.00 15759.56 61.56 0.00 0.00 0.00 0.00 0.00 00:12:07.297 [2024-12-05T12:43:38.823Z] =================================================================================================================== 00:12:07.297 [2024-12-05T12:43:38.823Z] Total : 15759.56 61.56 0.00 0.00 0.00 0.00 0.00 00:12:07.297 00:12:08.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.234 Nvme0n1 : 10.00 15796.50 61.71 0.00 0.00 0.00 0.00 0.00 00:12:08.234 [2024-12-05T12:43:39.760Z] =================================================================================================================== 00:12:08.234 [2024-12-05T12:43:39.760Z] Total : 15796.50 61.71 0.00 0.00 0.00 0.00 0.00 00:12:08.234 00:12:08.234 00:12:08.234 Latency(us) 00:12:08.234 [2024-12-05T12:43:39.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:08.234 Nvme0n1 : 10.01 15796.03 61.70 0.00 0.00 8098.67 3713.71 18932.62 00:12:08.234 [2024-12-05T12:43:39.760Z] =================================================================================================================== 00:12:08.234 [2024-12-05T12:43:39.760Z] Total : 15796.03 61.70 0.00 0.00 8098.67 3713.71 18932.62 00:12:08.234 { 00:12:08.234 "results": [ 00:12:08.234 { 00:12:08.234 "job": "Nvme0n1", 00:12:08.234 "core_mask": "0x2", 00:12:08.234 "workload": "randwrite", 00:12:08.234 "status": "finished", 00:12:08.234 "queue_depth": 128, 00:12:08.234 "io_size": 4096, 00:12:08.234 "runtime": 10.008401, 00:12:08.234 "iops": 15796.029755402486, 00:12:08.234 "mibps": 61.70324123204096, 00:12:08.234 "io_failed": 0, 00:12:08.234 "io_timeout": 0, 00:12:08.234 "avg_latency_us": 8098.674197536329, 00:12:08.234 "min_latency_us": 3713.7066666666665, 00:12:08.234 "max_latency_us": 18932.62222222222 00:12:08.234 } 00:12:08.234 ], 00:12:08.234 "core_count": 1 00:12:08.234 } 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2151503 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2151503 ']' 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2151503 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2151503 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:08.234 13:43:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2151503' 00:12:08.234 killing process with pid 2151503 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2151503 00:12:08.234 Received shutdown signal, test time was about 10.000000 seconds 00:12:08.234 00:12:08.234 Latency(us) 00:12:08.234 [2024-12-05T12:43:39.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.234 [2024-12-05T12:43:39.760Z] =================================================================================================================== 00:12:08.234 [2024-12-05T12:43:39.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2151503 00:12:08.234 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.802 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:09.062 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:09.062 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2148873 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2148873 00:12:09.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2148873 Killed "${NVMF_APP[@]}" "$@" 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2152978 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2152978 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2152978 ']' 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.322 13:43:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.322 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:09.322 [2024-12-05 13:43:40.713226] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:09.322 [2024-12-05 13:43:40.713319] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.322 [2024-12-05 13:43:40.786306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.322 [2024-12-05 13:43:40.841575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.322 [2024-12-05 13:43:40.841644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.322 [2024-12-05 13:43:40.841673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.322 [2024-12-05 13:43:40.841685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.322 [2024-12-05 13:43:40.841695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:09.322 [2024-12-05 13:43:40.842313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.583 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.583 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:09.583 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.583 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.583 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:09.583 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.583 13:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:09.842 [2024-12-05 13:43:41.242279] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:09.842 [2024-12-05 13:43:41.242459] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:09.842 [2024-12-05 13:43:41.242512] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 68760dd5-ddb3-4a8e-aa3d-c804fc907025 00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=68760dd5-ddb3-4a8e-aa3d-c804fc907025 
00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.842 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:10.100 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 68760dd5-ddb3-4a8e-aa3d-c804fc907025 -t 2000 00:12:10.360 [ 00:12:10.360 { 00:12:10.360 "name": "68760dd5-ddb3-4a8e-aa3d-c804fc907025", 00:12:10.360 "aliases": [ 00:12:10.360 "lvs/lvol" 00:12:10.360 ], 00:12:10.360 "product_name": "Logical Volume", 00:12:10.360 "block_size": 4096, 00:12:10.360 "num_blocks": 38912, 00:12:10.360 "uuid": "68760dd5-ddb3-4a8e-aa3d-c804fc907025", 00:12:10.360 "assigned_rate_limits": { 00:12:10.360 "rw_ios_per_sec": 0, 00:12:10.360 "rw_mbytes_per_sec": 0, 00:12:10.360 "r_mbytes_per_sec": 0, 00:12:10.360 "w_mbytes_per_sec": 0 00:12:10.360 }, 00:12:10.360 "claimed": false, 00:12:10.360 "zoned": false, 00:12:10.360 "supported_io_types": { 00:12:10.360 "read": true, 00:12:10.360 "write": true, 00:12:10.360 "unmap": true, 00:12:10.360 "flush": false, 00:12:10.360 "reset": true, 00:12:10.360 "nvme_admin": false, 00:12:10.360 "nvme_io": false, 00:12:10.360 "nvme_io_md": false, 00:12:10.360 "write_zeroes": true, 00:12:10.360 "zcopy": false, 00:12:10.360 "get_zone_info": false, 00:12:10.360 "zone_management": false, 00:12:10.360 "zone_append": 
false, 00:12:10.360 "compare": false, 00:12:10.360 "compare_and_write": false, 00:12:10.360 "abort": false, 00:12:10.360 "seek_hole": true, 00:12:10.360 "seek_data": true, 00:12:10.360 "copy": false, 00:12:10.360 "nvme_iov_md": false 00:12:10.360 }, 00:12:10.361 "driver_specific": { 00:12:10.361 "lvol": { 00:12:10.361 "lvol_store_uuid": "ce8170fe-912b-42ca-a8c5-2230ee97dd30", 00:12:10.361 "base_bdev": "aio_bdev", 00:12:10.361 "thin_provision": false, 00:12:10.361 "num_allocated_clusters": 38, 00:12:10.361 "snapshot": false, 00:12:10.361 "clone": false, 00:12:10.361 "esnap_clone": false 00:12:10.361 } 00:12:10.361 } 00:12:10.361 } 00:12:10.361 ] 00:12:10.361 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:10.361 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:10.361 13:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:10.621 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:10.621 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:10.621 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:10.879 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:10.880 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:12:11.140 [2024-12-05 13:43:42.599682] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.140 13:43:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:11.140 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:11.400 request: 00:12:11.400 { 00:12:11.400 "uuid": "ce8170fe-912b-42ca-a8c5-2230ee97dd30", 00:12:11.400 "method": "bdev_lvol_get_lvstores", 00:12:11.400 "req_id": 1 00:12:11.400 } 00:12:11.400 Got JSON-RPC error response 00:12:11.400 response: 00:12:11.400 { 00:12:11.400 "code": -19, 00:12:11.400 "message": "No such device" 00:12:11.400 } 00:12:11.400 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:11.400 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:11.400 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:11.400 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:11.400 13:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:11.661 aio_bdev 00:12:11.661 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 68760dd5-ddb3-4a8e-aa3d-c804fc907025 00:12:11.661 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=68760dd5-ddb3-4a8e-aa3d-c804fc907025 00:12:11.661 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.661 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:11.661 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.661 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.661 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:12.230 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 68760dd5-ddb3-4a8e-aa3d-c804fc907025 -t 2000 00:12:12.230 [ 00:12:12.230 { 00:12:12.230 "name": "68760dd5-ddb3-4a8e-aa3d-c804fc907025", 00:12:12.230 "aliases": [ 00:12:12.230 "lvs/lvol" 00:12:12.230 ], 00:12:12.230 "product_name": "Logical Volume", 00:12:12.230 "block_size": 4096, 00:12:12.230 "num_blocks": 38912, 00:12:12.230 "uuid": "68760dd5-ddb3-4a8e-aa3d-c804fc907025", 00:12:12.230 "assigned_rate_limits": { 00:12:12.230 "rw_ios_per_sec": 0, 00:12:12.230 "rw_mbytes_per_sec": 0, 00:12:12.230 "r_mbytes_per_sec": 0, 00:12:12.230 "w_mbytes_per_sec": 0 00:12:12.230 }, 00:12:12.230 "claimed": false, 00:12:12.230 "zoned": false, 00:12:12.230 "supported_io_types": { 00:12:12.230 "read": true, 00:12:12.230 "write": true, 00:12:12.230 "unmap": true, 00:12:12.230 "flush": false, 00:12:12.230 "reset": true, 00:12:12.230 "nvme_admin": false, 00:12:12.230 "nvme_io": false, 00:12:12.230 "nvme_io_md": false, 00:12:12.230 "write_zeroes": true, 00:12:12.230 "zcopy": false, 00:12:12.230 "get_zone_info": false, 00:12:12.230 "zone_management": false, 00:12:12.230 "zone_append": false, 00:12:12.230 "compare": false, 00:12:12.230 "compare_and_write": false, 
00:12:12.230 "abort": false, 00:12:12.230 "seek_hole": true, 00:12:12.230 "seek_data": true, 00:12:12.230 "copy": false, 00:12:12.230 "nvme_iov_md": false 00:12:12.230 }, 00:12:12.230 "driver_specific": { 00:12:12.230 "lvol": { 00:12:12.230 "lvol_store_uuid": "ce8170fe-912b-42ca-a8c5-2230ee97dd30", 00:12:12.230 "base_bdev": "aio_bdev", 00:12:12.230 "thin_provision": false, 00:12:12.230 "num_allocated_clusters": 38, 00:12:12.230 "snapshot": false, 00:12:12.230 "clone": false, 00:12:12.230 "esnap_clone": false 00:12:12.230 } 00:12:12.230 } 00:12:12.230 } 00:12:12.230 ] 00:12:12.230 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:12.230 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:12.230 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:12.487 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:12.487 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:12.487 13:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:12.746 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:12.746 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 68760dd5-ddb3-4a8e-aa3d-c804fc907025 00:12:13.006 13:43:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce8170fe-912b-42ca-a8c5-2230ee97dd30 00:12:13.573 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:13.573 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:13.831 00:12:13.831 real 0m19.448s 00:12:13.831 user 0m49.177s 00:12:13.831 sys 0m4.554s 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:13.831 ************************************ 00:12:13.831 END TEST lvs_grow_dirty 00:12:13.831 ************************************ 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:13.831 nvmf_trace.0 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.831 rmmod nvme_tcp 00:12:13.831 rmmod nvme_fabrics 00:12:13.831 rmmod nvme_keyring 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2152978 ']' 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2152978 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2152978 ']' 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2152978 
00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2152978 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2152978' 00:12:13.831 killing process with pid 2152978 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2152978 00:12:13.831 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2152978 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.089 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.626 00:12:16.626 real 0m42.663s 00:12:16.626 user 1m12.429s 00:12:16.626 sys 0m8.451s 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:16.626 ************************************ 00:12:16.626 END TEST nvmf_lvs_grow 00:12:16.626 ************************************ 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:16.626 ************************************ 00:12:16.626 START TEST nvmf_bdev_io_wait 00:12:16.626 ************************************ 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:16.626 * Looking for test storage... 
00:12:16.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.626 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.627 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.627 --rc genhtml_branch_coverage=1 00:12:16.627 --rc genhtml_function_coverage=1 00:12:16.627 --rc genhtml_legend=1 00:12:16.627 --rc geninfo_all_blocks=1 00:12:16.627 --rc geninfo_unexecuted_blocks=1 00:12:16.627 00:12:16.627 ' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.627 --rc genhtml_branch_coverage=1 00:12:16.627 --rc genhtml_function_coverage=1 00:12:16.627 --rc genhtml_legend=1 00:12:16.627 --rc geninfo_all_blocks=1 00:12:16.627 --rc geninfo_unexecuted_blocks=1 00:12:16.627 00:12:16.627 ' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.627 --rc genhtml_branch_coverage=1 00:12:16.627 --rc genhtml_function_coverage=1 00:12:16.627 --rc genhtml_legend=1 00:12:16.627 --rc geninfo_all_blocks=1 00:12:16.627 --rc geninfo_unexecuted_blocks=1 00:12:16.627 00:12:16.627 ' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.627 --rc genhtml_branch_coverage=1 00:12:16.627 --rc genhtml_function_coverage=1 00:12:16.627 --rc genhtml_legend=1 00:12:16.627 --rc geninfo_all_blocks=1 00:12:16.627 --rc geninfo_unexecuted_blocks=1 00:12:16.627 00:12:16.627 ' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.627 13:43:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.627 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.532 13:43:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:18.532 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:18.532 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.532 13:43:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:18.532 Found net devices under 0000:09:00.0: cvl_0_0 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.532 
13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:18.532 Found net devices under 0000:09:00.1: cvl_0_1 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.532 13:43:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.532 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:12:18.533 00:12:18.533 --- 10.0.0.2 ping statistics --- 00:12:18.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.533 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:18.533 00:12:18.533 --- 10.0.0.1 ping statistics --- 00:12:18.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.533 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.533 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2155515 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2155515 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2155515 ']' 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.533 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.792 [2024-12-05 13:43:50.077382] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:18.792 [2024-12-05 13:43:50.077513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.792 [2024-12-05 13:43:50.154173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.792 [2024-12-05 13:43:50.210149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.792 [2024-12-05 13:43:50.210213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:18.792 [2024-12-05 13:43:50.210226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.792 [2024-12-05 13:43:50.210252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.793 [2024-12-05 13:43:50.210262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.793 [2024-12-05 13:43:50.211790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.793 [2024-12-05 13:43:50.211848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.793 [2024-12-05 13:43:50.211926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.793 [2024-12-05 13:43:50.211930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.793 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.793 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:12:18.793 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.793 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.793 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 13:43:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 [2024-12-05 13:43:50.426685] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 Malloc0 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.052 
13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.052 [2024-12-05 13:43:50.480561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2155546 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2155548 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:12:19.052 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:19.053 { 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme$subsystem", 00:12:19.053 "trtype": "$TEST_TRANSPORT", 00:12:19.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "$NVMF_PORT", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:19.053 "hdgst": ${hdgst:-false}, 00:12:19.053 "ddgst": ${ddgst:-false} 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 } 00:12:19.053 EOF 00:12:19.053 )") 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2155550 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:19.053 13:43:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2155552 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:19.053 { 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme$subsystem", 00:12:19.053 "trtype": "$TEST_TRANSPORT", 00:12:19.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "$NVMF_PORT", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:19.053 "hdgst": ${hdgst:-false}, 00:12:19.053 "ddgst": ${ddgst:-false} 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 } 00:12:19.053 EOF 00:12:19.053 )") 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:19.053 { 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme$subsystem", 00:12:19.053 
"trtype": "$TEST_TRANSPORT", 00:12:19.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "$NVMF_PORT", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:19.053 "hdgst": ${hdgst:-false}, 00:12:19.053 "ddgst": ${ddgst:-false} 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 } 00:12:19.053 EOF 00:12:19.053 )") 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:19.053 { 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme$subsystem", 00:12:19.053 "trtype": "$TEST_TRANSPORT", 00:12:19.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "$NVMF_PORT", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:19.053 "hdgst": ${hdgst:-false}, 00:12:19.053 "ddgst": ${ddgst:-false} 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 } 00:12:19.053 EOF 00:12:19.053 )") 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2155546 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # 
cat 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme1", 00:12:19.053 "trtype": "tcp", 00:12:19.053 "traddr": "10.0.0.2", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "4420", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:19.053 "hdgst": false, 00:12:19.053 "ddgst": false 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 }' 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme1", 00:12:19.053 "trtype": "tcp", 00:12:19.053 "traddr": "10.0.0.2", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "4420", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:19.053 "hdgst": false, 00:12:19.053 "ddgst": false 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 }' 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme1", 00:12:19.053 "trtype": "tcp", 00:12:19.053 "traddr": "10.0.0.2", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "4420", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:19.053 "hdgst": false, 00:12:19.053 "ddgst": false 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 }' 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:19.053 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:19.053 "params": { 00:12:19.053 "name": "Nvme1", 00:12:19.053 "trtype": "tcp", 00:12:19.053 "traddr": "10.0.0.2", 00:12:19.053 "adrfam": "ipv4", 00:12:19.053 "trsvcid": "4420", 00:12:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:19.053 "hdgst": false, 00:12:19.053 "ddgst": false 00:12:19.053 }, 00:12:19.053 "method": "bdev_nvme_attach_controller" 00:12:19.053 }' 00:12:19.053 [2024-12-05 13:43:50.531988] Starting SPDK v25.01-pre git sha1 
62083ef48 / DPDK 24.03.0 initialization... 00:12:19.053 [2024-12-05 13:43:50.531988] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:19.053 [2024-12-05 13:43:50.531988] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:19.053 [2024-12-05 13:43:50.532094] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-05 13:43:50.532093] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-05 13:43:50.532094] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:19.053 --proc-type=auto ] 00:12:19.053 --proc-type=auto ] 00:12:19.053 [2024-12-05 13:43:50.532414] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:19.053 [2024-12-05 13:43:50.532498] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:19.313 [2024-12-05 13:43:50.726797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.313 [2024-12-05 13:43:50.781530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:19.313 [2024-12-05 13:43:50.828299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.573 [2024-12-05 13:43:50.882947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:19.573 [2024-12-05 13:43:50.927399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.573 [2024-12-05 13:43:50.982295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:19.573 [2024-12-05 13:43:50.995984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.573 [2024-12-05 13:43:51.047566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:19.833 Running I/O for 1 seconds... 00:12:19.833 Running I/O for 1 seconds... 00:12:19.833 Running I/O for 1 seconds... 00:12:19.833 Running I/O for 1 seconds... 
00:12:20.774 10413.00 IOPS, 40.68 MiB/s 00:12:20.774 Latency(us) 00:12:20.774 [2024-12-05T12:43:52.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.774 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:20.774 Nvme1n1 : 1.01 10471.02 40.90 0.00 0.00 12174.45 5946.79 19903.53 00:12:20.774 [2024-12-05T12:43:52.300Z] =================================================================================================================== 00:12:20.774 [2024-12-05T12:43:52.300Z] Total : 10471.02 40.90 0.00 0.00 12174.45 5946.79 19903.53 00:12:20.774 8197.00 IOPS, 32.02 MiB/s 00:12:20.774 Latency(us) 00:12:20.774 [2024-12-05T12:43:52.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.774 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:20.774 Nvme1n1 : 1.01 8248.44 32.22 0.00 0.00 15438.15 8009.96 23398.78 00:12:20.774 [2024-12-05T12:43:52.300Z] =================================================================================================================== 00:12:20.774 [2024-12-05T12:43:52.300Z] Total : 8248.44 32.22 0.00 0.00 15438.15 8009.96 23398.78 00:12:21.033 8032.00 IOPS, 31.38 MiB/s 00:12:21.033 Latency(us) 00:12:21.033 [2024-12-05T12:43:52.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.033 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:21.033 Nvme1n1 : 1.01 8123.64 31.73 0.00 0.00 15704.39 4393.34 29127.11 00:12:21.033 [2024-12-05T12:43:52.559Z] =================================================================================================================== 00:12:21.033 [2024-12-05T12:43:52.559Z] Total : 8123.64 31.73 0.00 0.00 15704.39 4393.34 29127.11 00:12:21.033 178144.00 IOPS, 695.88 MiB/s 00:12:21.033 Latency(us) 00:12:21.033 [2024-12-05T12:43:52.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.033 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:12:21.033 Nvme1n1 : 1.00 177798.54 694.53 0.00 0.00 715.89 291.27 1905.40 00:12:21.033 [2024-12-05T12:43:52.559Z] =================================================================================================================== 00:12:21.033 [2024-12-05T12:43:52.559Z] Total : 177798.54 694.53 0.00 0.00 715.89 291.27 1905.40 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2155548 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2155550 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2155552 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:21.033 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.033 
13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.033 rmmod nvme_tcp 00:12:21.033 rmmod nvme_fabrics 00:12:21.033 rmmod nvme_keyring 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2155515 ']' 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2155515 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2155515 ']' 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2155515 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155515 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155515' 00:12:21.292 killing process with pid 2155515 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2155515 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 2155515 00:12:21.292 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.552 13:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.495 00:12:23.495 real 0m7.272s 00:12:23.495 user 0m16.169s 00:12:23.495 sys 0m3.762s 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.495 ************************************ 00:12:23.495 END TEST nvmf_bdev_io_wait 00:12:23.495 
************************************ 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:23.495 ************************************ 00:12:23.495 START TEST nvmf_queue_depth 00:12:23.495 ************************************ 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:23.495 * Looking for test storage... 00:12:23.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:12:23.495 13:43:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.755 13:43:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.755 --rc genhtml_branch_coverage=1 00:12:23.755 --rc genhtml_function_coverage=1 00:12:23.755 --rc genhtml_legend=1 00:12:23.755 --rc geninfo_all_blocks=1 00:12:23.755 --rc 
geninfo_unexecuted_blocks=1 00:12:23.755 00:12:23.755 ' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.755 --rc genhtml_branch_coverage=1 00:12:23.755 --rc genhtml_function_coverage=1 00:12:23.755 --rc genhtml_legend=1 00:12:23.755 --rc geninfo_all_blocks=1 00:12:23.755 --rc geninfo_unexecuted_blocks=1 00:12:23.755 00:12:23.755 ' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.755 --rc genhtml_branch_coverage=1 00:12:23.755 --rc genhtml_function_coverage=1 00:12:23.755 --rc genhtml_legend=1 00:12:23.755 --rc geninfo_all_blocks=1 00:12:23.755 --rc geninfo_unexecuted_blocks=1 00:12:23.755 00:12:23.755 ' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.755 --rc genhtml_branch_coverage=1 00:12:23.755 --rc genhtml_function_coverage=1 00:12:23.755 --rc genhtml_legend=1 00:12:23.755 --rc geninfo_all_blocks=1 00:12:23.755 --rc geninfo_unexecuted_blocks=1 00:12:23.755 00:12:23.755 ' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.755 13:43:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.755 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.756 13:43:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.756 13:43:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.756 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.289 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.290 13:43:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:26.290 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:26.290 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:26.290 Found net devices under 0000:09:00.0: cvl_0_0 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:26.290 Found net devices under 0000:09:00.1: cvl_0_1 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.290 
13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:26.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:12:26.290 00:12:26.290 --- 10.0.0.2 ping statistics --- 00:12:26.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.290 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:12:26.290 00:12:26.290 --- 10.0.0.1 ping statistics --- 00:12:26.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.290 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.290 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2157820 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2157820 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2157820 ']' 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 [2024-12-05 13:43:57.468655] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:26.291 [2024-12-05 13:43:57.468757] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.291 [2024-12-05 13:43:57.545499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.291 [2024-12-05 13:43:57.596736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.291 [2024-12-05 13:43:57.596793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:26.291 [2024-12-05 13:43:57.596820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.291 [2024-12-05 13:43:57.596831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.291 [2024-12-05 13:43:57.596840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.291 [2024-12-05 13:43:57.597414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 [2024-12-05 13:43:57.738520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 Malloc0 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.291 [2024-12-05 13:43:57.786864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.291 13:43:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2157927 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2157927 /var/tmp/bdevperf.sock 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2157927 ']' 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.291 13:43:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:26.550 [2024-12-05 13:43:57.834826] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:26.550 [2024-12-05 13:43:57.834900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157927 ] 00:12:26.550 [2024-12-05 13:43:57.898915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.550 [2024-12-05 13:43:57.953370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.810 13:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.810 13:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:26.810 13:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:26.810 13:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.810 13:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:27.069 NVMe0n1 00:12:27.069 13:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.069 13:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:27.069 Running I/O for 10 seconds... 
00:12:28.977 8204.00 IOPS, 32.05 MiB/s [2024-12-05T12:44:01.882Z] 8389.00 IOPS, 32.77 MiB/s [2024-12-05T12:44:02.816Z] 8533.33 IOPS, 33.33 MiB/s [2024-12-05T12:44:03.753Z] 8655.00 IOPS, 33.81 MiB/s [2024-12-05T12:44:04.693Z] 8631.20 IOPS, 33.72 MiB/s [2024-12-05T12:44:05.636Z] 8693.67 IOPS, 33.96 MiB/s [2024-12-05T12:44:06.576Z] 8721.71 IOPS, 34.07 MiB/s [2024-12-05T12:44:07.518Z] 8717.25 IOPS, 34.05 MiB/s [2024-12-05T12:44:08.898Z] 8751.22 IOPS, 34.18 MiB/s [2024-12-05T12:44:08.898Z] 8789.80 IOPS, 34.34 MiB/s 00:12:37.372 Latency(us) 00:12:37.372 [2024-12-05T12:44:08.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.372 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:37.372 Verification LBA range: start 0x0 length 0x4000 00:12:37.372 NVMe0n1 : 10.09 8810.34 34.42 0.00 0.00 115772.68 20971.52 71458.51 00:12:37.372 [2024-12-05T12:44:08.898Z] =================================================================================================================== 00:12:37.372 [2024-12-05T12:44:08.898Z] Total : 8810.34 34.42 0.00 0.00 115772.68 20971.52 71458.51 00:12:37.372 { 00:12:37.372 "results": [ 00:12:37.372 { 00:12:37.372 "job": "NVMe0n1", 00:12:37.372 "core_mask": "0x1", 00:12:37.372 "workload": "verify", 00:12:37.372 "status": "finished", 00:12:37.372 "verify_range": { 00:12:37.372 "start": 0, 00:12:37.372 "length": 16384 00:12:37.372 }, 00:12:37.372 "queue_depth": 1024, 00:12:37.372 "io_size": 4096, 00:12:37.372 "runtime": 10.091555, 00:12:37.372 "iops": 8810.336959963059, 00:12:37.372 "mibps": 34.4153787498557, 00:12:37.372 "io_failed": 0, 00:12:37.372 "io_timeout": 0, 00:12:37.372 "avg_latency_us": 115772.6848598791, 00:12:37.372 "min_latency_us": 20971.52, 00:12:37.372 "max_latency_us": 71458.5125925926 00:12:37.372 } 00:12:37.372 ], 00:12:37.372 "core_count": 1 00:12:37.372 } 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2157927 
00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2157927 ']' 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2157927 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2157927 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2157927' 00:12:37.372 killing process with pid 2157927 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2157927 00:12:37.372 Received shutdown signal, test time was about 10.000000 seconds 00:12:37.372 00:12:37.372 Latency(us) 00:12:37.372 [2024-12-05T12:44:08.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.372 [2024-12-05T12:44:08.898Z] =================================================================================================================== 00:12:37.372 [2024-12-05T12:44:08.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2157927 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:37.372 
13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.372 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.372 rmmod nvme_tcp 00:12:37.372 rmmod nvme_fabrics 00:12:37.635 rmmod nvme_keyring 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2157820 ']' 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2157820 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2157820 ']' 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2157820 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2157820 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2157820' 00:12:37.635 killing process with pid 2157820 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2157820 00:12:37.635 13:44:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2157820 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.895 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.804 13:44:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.804 00:12:39.804 real 0m16.340s 00:12:39.804 user 0m22.994s 00:12:39.804 sys 0m3.138s 00:12:39.804 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.804 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.804 ************************************ 00:12:39.804 END TEST nvmf_queue_depth 00:12:39.804 ************************************ 00:12:39.804 13:44:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:39.804 13:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.804 13:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.804 13:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.804 ************************************ 00:12:39.804 START TEST nvmf_target_multipath 00:12:39.804 ************************************ 00:12:39.804 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:40.063 * Looking for test storage... 
00:12:40.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:40.063 13:44:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:40.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.063 --rc genhtml_branch_coverage=1 00:12:40.063 --rc genhtml_function_coverage=1 00:12:40.063 --rc genhtml_legend=1 00:12:40.063 --rc geninfo_all_blocks=1 00:12:40.063 --rc geninfo_unexecuted_blocks=1 00:12:40.063 00:12:40.063 ' 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:40.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.063 --rc genhtml_branch_coverage=1 00:12:40.063 --rc genhtml_function_coverage=1 00:12:40.063 --rc genhtml_legend=1 00:12:40.063 --rc geninfo_all_blocks=1 00:12:40.063 --rc geninfo_unexecuted_blocks=1 00:12:40.063 00:12:40.063 ' 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:40.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.063 --rc genhtml_branch_coverage=1 00:12:40.063 --rc genhtml_function_coverage=1 00:12:40.063 --rc genhtml_legend=1 00:12:40.063 --rc geninfo_all_blocks=1 00:12:40.063 --rc geninfo_unexecuted_blocks=1 00:12:40.063 00:12:40.063 ' 00:12:40.063 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:40.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.063 --rc genhtml_branch_coverage=1 00:12:40.063 --rc genhtml_function_coverage=1 00:12:40.063 --rc genhtml_legend=1 00:12:40.063 --rc geninfo_all_blocks=1 00:12:40.063 --rc geninfo_unexecuted_blocks=1 00:12:40.063 00:12:40.063 ' 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.064 13:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.600 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:42.601 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:42.601 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:42.601 Found net devices under 0000:09:00.0: cvl_0_0 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.601 13:44:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:42.601 Found net devices under 0000:09:00.1: cvl_0_1 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:12:42.601 00:12:42.601 --- 10.0.0.2 ping statistics --- 00:12:42.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.601 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:12:42.601 00:12:42.601 --- 10.0.0.1 ping statistics --- 00:12:42.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.601 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:42.601 only one NIC for nvmf test 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:42.601 13:44:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.601 rmmod nvme_tcp 00:12:42.601 rmmod nvme_fabrics 00:12:42.601 rmmod nvme_keyring 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:42.601 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:42.602 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.602 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.602 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.602 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.602 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.602 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.602 13:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.510 00:12:44.510 real 0m4.685s 00:12:44.510 user 0m1.011s 00:12:44.510 sys 0m1.691s 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.510 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:44.510 ************************************ 00:12:44.510 END TEST nvmf_target_multipath 00:12:44.510 ************************************ 00:12:44.510 13:44:16 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:44.510 13:44:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.510 13:44:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.510 13:44:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 ************************************ 00:12:44.769 START TEST nvmf_zcopy 00:12:44.769 ************************************ 00:12:44.769 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:44.769 * Looking for test storage... 00:12:44.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.769 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:44.769 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:12:44.769 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:44.769 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:44.769 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.770 13:44:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:44.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.770 --rc genhtml_branch_coverage=1 00:12:44.770 --rc genhtml_function_coverage=1 00:12:44.770 --rc genhtml_legend=1 00:12:44.770 --rc geninfo_all_blocks=1 00:12:44.770 --rc geninfo_unexecuted_blocks=1 00:12:44.770 00:12:44.770 ' 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:44.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.770 --rc genhtml_branch_coverage=1 00:12:44.770 --rc genhtml_function_coverage=1 00:12:44.770 --rc genhtml_legend=1 00:12:44.770 --rc geninfo_all_blocks=1 00:12:44.770 --rc geninfo_unexecuted_blocks=1 00:12:44.770 00:12:44.770 ' 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:44.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.770 --rc genhtml_branch_coverage=1 00:12:44.770 --rc genhtml_function_coverage=1 00:12:44.770 --rc genhtml_legend=1 00:12:44.770 --rc geninfo_all_blocks=1 00:12:44.770 --rc geninfo_unexecuted_blocks=1 00:12:44.770 00:12:44.770 ' 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:44.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.770 --rc genhtml_branch_coverage=1 00:12:44.770 --rc 
genhtml_function_coverage=1 00:12:44.770 --rc genhtml_legend=1 00:12:44.770 --rc geninfo_all_blocks=1 00:12:44.770 --rc geninfo_unexecuted_blocks=1 00:12:44.770 00:12:44.770 ' 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.770 13:44:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.770 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.771 13:44:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.771 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.339 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.339 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.339 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.339 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.339 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.339 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.339 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.340 13:44:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:47.340 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:47.340 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:47.340 Found net devices under 0000:09:00.0: cvl_0_0 00:12:47.340 13:44:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:47.340 Found net devices under 0000:09:00.1: cvl_0_1 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.340 13:44:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:12:47.340 00:12:47.340 --- 10.0.0.2 ping statistics --- 00:12:47.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.340 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:12:47.340 00:12:47.340 --- 10.0.0.1 ping statistics --- 00:12:47.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.340 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.340 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2163144 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2163144 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2163144 ']' 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.341 [2024-12-05 13:44:18.609988] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:47.341 [2024-12-05 13:44:18.610084] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.341 [2024-12-05 13:44:18.679844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.341 [2024-12-05 13:44:18.730370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.341 [2024-12-05 13:44:18.730433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:47.341 [2024-12-05 13:44:18.730464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.341 [2024-12-05 13:44:18.730475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.341 [2024-12-05 13:44:18.730484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.341 [2024-12-05 13:44:18.731068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.341 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.617 [2024-12-05 13:44:18.870953] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.617 [2024-12-05 13:44:18.887146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.617 malloc0 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:47.617 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:47.618 { 00:12:47.618 "params": { 00:12:47.618 "name": "Nvme$subsystem", 00:12:47.618 "trtype": "$TEST_TRANSPORT", 00:12:47.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.618 "adrfam": "ipv4", 00:12:47.618 "trsvcid": "$NVMF_PORT", 00:12:47.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.618 "hdgst": ${hdgst:-false}, 00:12:47.618 "ddgst": ${ddgst:-false} 00:12:47.618 }, 00:12:47.618 "method": "bdev_nvme_attach_controller" 00:12:47.618 } 00:12:47.618 EOF 00:12:47.618 )") 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:47.618 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:47.618 "params": { 00:12:47.618 "name": "Nvme1", 00:12:47.618 "trtype": "tcp", 00:12:47.618 "traddr": "10.0.0.2", 00:12:47.618 "adrfam": "ipv4", 00:12:47.618 "trsvcid": "4420", 00:12:47.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.618 "hdgst": false, 00:12:47.618 "ddgst": false 00:12:47.618 }, 00:12:47.618 "method": "bdev_nvme_attach_controller" 00:12:47.618 }' 00:12:47.618 [2024-12-05 13:44:18.970902] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:47.618 [2024-12-05 13:44:18.970976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163170 ] 00:12:47.618 [2024-12-05 13:44:19.035876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.618 [2024-12-05 13:44:19.093570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.876 Running I/O for 10 seconds... 
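The bdevperf run above receives its target configuration through a file descriptor (`--json /dev/fd/62`), built by the `gen_nvmf_target_json` heredoc-plus-`jq` pattern visible in the trace. A minimal dry re-creation of that pattern is sketched below; the variable values are taken from the config the trace prints, and this is an illustrative sketch, not the actual common.sh source (the real script additionally pipes the result through `jq .` to validate it):

```shell
# Sketch: expand a per-subsystem heredoc template into the JSON that
# bdevperf consumes for bdev_nvme_attach_controller (values from the log).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

In the real flow this string is handed to bdevperf via process substitution, so the config never touches disk.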
00:12:50.194 5701.00 IOPS, 44.54 MiB/s [2024-12-05T12:44:22.661Z] 5774.50 IOPS, 45.11 MiB/s [2024-12-05T12:44:23.598Z] 5797.00 IOPS, 45.29 MiB/s [2024-12-05T12:44:24.537Z] 5802.50 IOPS, 45.33 MiB/s [2024-12-05T12:44:25.474Z] 5802.20 IOPS, 45.33 MiB/s [2024-12-05T12:44:26.412Z] 5812.50 IOPS, 45.41 MiB/s [2024-12-05T12:44:27.351Z] 5818.00 IOPS, 45.45 MiB/s [2024-12-05T12:44:28.729Z] 5825.50 IOPS, 45.51 MiB/s [2024-12-05T12:44:29.667Z] 5830.33 IOPS, 45.55 MiB/s [2024-12-05T12:44:29.667Z] 5838.20 IOPS, 45.61 MiB/s 00:12:58.141 Latency(us) 00:12:58.141 [2024-12-05T12:44:29.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:58.141 Verification LBA range: start 0x0 length 0x1000 00:12:58.141 Nvme1n1 : 10.01 5838.86 45.62 0.00 0.00 21863.12 3640.89 30292.20 00:12:58.141 [2024-12-05T12:44:29.667Z] =================================================================================================================== 00:12:58.141 [2024-12-05T12:44:29.667Z] Total : 5838.86 45.62 0.00 0.00 21863.12 3640.89 30292.20 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2164484 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:58.141 13:44:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:58.141 { 00:12:58.141 "params": { 00:12:58.141 "name": "Nvme$subsystem", 00:12:58.141 "trtype": "$TEST_TRANSPORT", 00:12:58.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:58.141 "adrfam": "ipv4", 00:12:58.141 "trsvcid": "$NVMF_PORT", 00:12:58.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:58.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:58.141 "hdgst": ${hdgst:-false}, 00:12:58.141 "ddgst": ${ddgst:-false} 00:12:58.141 }, 00:12:58.141 "method": "bdev_nvme_attach_controller" 00:12:58.141 } 00:12:58.141 EOF 00:12:58.141 )") 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:58.141 [2024-12-05 13:44:29.536601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.141 [2024-12-05 13:44:29.536646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:58.141 13:44:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:58.141 "params": { 00:12:58.141 "name": "Nvme1", 00:12:58.141 "trtype": "tcp", 00:12:58.141 "traddr": "10.0.0.2", 00:12:58.141 "adrfam": "ipv4", 00:12:58.141 "trsvcid": "4420", 00:12:58.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:58.141 "hdgst": false, 00:12:58.141 "ddgst": false 00:12:58.141 }, 00:12:58.141 "method": "bdev_nvme_attach_controller" 00:12:58.141 }' 00:12:58.141 [2024-12-05 13:44:29.544535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.141 [2024-12-05 13:44:29.544560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.141 [2024-12-05 13:44:29.552553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.141 [2024-12-05 13:44:29.552575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.141 [2024-12-05 13:44:29.560575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.141 [2024-12-05 13:44:29.560596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.141 [2024-12-05 13:44:29.568595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.141 [2024-12-05 13:44:29.568616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.576099] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:58.142 [2024-12-05 13:44:29.576166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164484 ] 00:12:58.142 [2024-12-05 13:44:29.576617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.576639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.584655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.584676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.592672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.592693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.600710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.600730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.608732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.608753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.616752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.616772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.624787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.624807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:12:58.142 [2024-12-05 13:44:29.632811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.632833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.640829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.640850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.644917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.142 [2024-12-05 13:44:29.648843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.648862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.142 [2024-12-05 13:44:29.656904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.142 [2024-12-05 13:44:29.656939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.664899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.664927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.672898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.672928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.680925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.680947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.688956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.688978] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.696969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.696999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.705008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.705029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.705572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.403 [2024-12-05 13:44:29.713012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.713032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.721062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.721093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.729103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.729140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.737128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.737166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.745152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.745192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.753162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:12:58.403 [2024-12-05 13:44:29.753197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.761188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.761225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.769177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.769198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.777203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.777230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.785255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.785293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.793275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.793314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.801252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.801273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.809273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 13:44:29.809293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.403 [2024-12-05 13:44:29.817294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.403 [2024-12-05 
13:44:29.817313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:58.403 [2024-12-05 13:44:29.825324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:58.403 [2024-12-05 13:44:29.825349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2130 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace" record pair repeats every ~8-11 ms, timestamps 13:44:29.833 through 13:44:30.009, elided ...]
00:12:58.666 Running I/O for 5 seconds...
[... the same error pair repeats, timestamps 13:44:30.017 through 13:44:31.009, elided ...]
00:12:59.709 11788.00 IOPS, 92.09 MiB/s [2024-12-05T12:44:31.235Z]
[... the same error pair repeats, timestamps 13:44:31.019 through 13:44:31.523, elided ...]
00:13:00.228 [2024-12-05 13:44:31.535601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:00.228 [2024-12-05 13:44:31.535629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:00.228 [2024-12-05 13:44:31.545321]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.228 [2024-12-05 13:44:31.545349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.228 [2024-12-05 13:44:31.556090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.228 [2024-12-05 13:44:31.556118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.228 [2024-12-05 13:44:31.567118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.228 [2024-12-05 13:44:31.567145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.228 [2024-12-05 13:44:31.577736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.228 [2024-12-05 13:44:31.577763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.228 [2024-12-05 13:44:31.590018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.228 [2024-12-05 13:44:31.590047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.599840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.599868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.610882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.610917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.621642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.621670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.632442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.632470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.643315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.643344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.653903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.653931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.666696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.666724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.677138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.677167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.688518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.688546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.701803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.701830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.712258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.712286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.722968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 
[2024-12-05 13:44:31.722996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.736136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.736164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.229 [2024-12-05 13:44:31.746687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.229 [2024-12-05 13:44:31.746715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.757552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.757580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.770123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.770152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.780263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.780290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.791209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.791237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.801561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.801589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.812248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.812276] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.822572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.822608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.833158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.833185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.843804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.843831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.854213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.854240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.865162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.865189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.875781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.875808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.886584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.886612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.899376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.899403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:00.488 [2024-12-05 13:44:31.909058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.909086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.488 [2024-12-05 13:44:31.919657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.488 [2024-12-05 13:44:31.919684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:31.930279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:31.930306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:31.942535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:31.942563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:31.952539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:31.952567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:31.963261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:31.963288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:31.973706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:31.973733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:31.984601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:31.984629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:31.997226] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:31.997254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.489 [2024-12-05 13:44:32.007361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.489 [2024-12-05 13:44:32.007389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.017766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.017795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 11831.00 IOPS, 92.43 MiB/s [2024-12-05T12:44:32.275Z] [2024-12-05 13:44:32.027915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.027951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.038660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.038688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.049230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.049258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.060012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.060040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.073402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.073438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.084117] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.084145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.094970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.094999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.105449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.105476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.115932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.115960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.126243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.126271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.136221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.136249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.146609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.146636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.156797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.156824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.167034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.167063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.177464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.177491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.187873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.187899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.198186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.198213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.208637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.208665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.749 [2024-12-05 13:44:32.219325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.749 [2024-12-05 13:44:32.219354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.750 [2024-12-05 13:44:32.229664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.750 [2024-12-05 13:44:32.229691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.750 [2024-12-05 13:44:32.240107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.750 [2024-12-05 13:44:32.240150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.750 [2024-12-05 13:44:32.250433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.750 
[2024-12-05 13:44:32.250461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.750 [2024-12-05 13:44:32.260822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.750 [2024-12-05 13:44:32.260850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.750 [2024-12-05 13:44:32.271411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.750 [2024-12-05 13:44:32.271448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.282547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.282576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.293231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.293259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.305696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.305724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.315766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.315793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.326873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.326901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.339202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.339230] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.349570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.349597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.360634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.360661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.373850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.373878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.384167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.384194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.394601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.394628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.405413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.405448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.417975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.418003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.427689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.427716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:01.010 [2024-12-05 13:44:32.439055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.439082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.449639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.449666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.460473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.460500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.472885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.472913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.483024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.483052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.010 [2024-12-05 13:44:32.493717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.010 [2024-12-05 13:44:32.493744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.011 [2024-12-05 13:44:32.504704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.011 [2024-12-05 13:44:32.504731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.011 [2024-12-05 13:44:32.515446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.011 [2024-12-05 13:44:32.515474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.011 [2024-12-05 13:44:32.528102] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.011 [2024-12-05 13:44:32.528130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.538344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.538372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.549126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.549154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.561910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.561938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.572242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.572270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.583043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.583071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.593912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.593940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.604671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.604698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.617677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.617705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.628083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.628111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.638652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.638680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.649277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.649305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.660107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.660134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.673327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.673355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.683593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.683620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.694592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.694619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.707081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 
[2024-12-05 13:44:32.707108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.717447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.717475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.728318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.728347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.740625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.740653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.749817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.749846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.761556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.761584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.772639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.772667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.270 [2024-12-05 13:44:32.783352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.270 [2024-12-05 13:44:32.783380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.530 [2024-12-05 13:44:32.796106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.530 [2024-12-05 13:44:32.796134] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.530 [2024-12-05 13:44:32.806513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.806541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.531 [2024-12-05 13:44:32.817783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.817811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.531 [2024-12-05 13:44:32.828645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.828674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.531 [2024-12-05 13:44:32.839378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.839406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.531 [2024-12-05 13:44:32.851783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.851811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.531 [2024-12-05 13:44:32.863803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.863832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.531 [2024-12-05 13:44:32.873594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.873621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.531 [2024-12-05 13:44:32.884093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.884120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:01.531 [2024-12-05 13:44:32.894261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.531 [2024-12-05 13:44:32.894305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same *ERROR* pair (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 10 ms intervals from 13:44:32.894 through 13:44:34.684, interleaved with two throughput progress markers: ...]
11857.33 IOPS, 92.64 MiB/s [2024-12-05T12:44:33.057Z]
11882.25 IOPS, 92.83 MiB/s [2024-12-05T12:44:34.099Z]
[2024-12-05 13:44:34.684065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.354 [2024-12-05 13:44:34.694705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.354 [2024-12-05 13:44:34.694733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.354 [2024-12-05 13:44:34.705531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.354 [2024-12-05 13:44:34.705558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.354 [2024-12-05 13:44:34.716242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.354 [2024-12-05 13:44:34.716278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.354 [2024-12-05 13:44:34.726563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.354 [2024-12-05 13:44:34.726590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.354 [2024-12-05 13:44:34.746841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.354 [2024-12-05 13:44:34.746871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.757559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.757586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.768375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.768403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.781450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.781477] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.791856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.791898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.802746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.802774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.813770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.813798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.824865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.824892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.837613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.837641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.848017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.848045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.858693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.858721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.355 [2024-12-05 13:44:34.869536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.355 [2024-12-05 13:44:34.869564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:03.615 [2024-12-05 13:44:34.880333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.880361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.893109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.893137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.902895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.902922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.913442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.913469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.924361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.924388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.935027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.935064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.947961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.947988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.958185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.958213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.968769] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.968797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.980074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.980103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:34.990948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:34.990976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.001900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.001929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.012407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.012450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.023586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.023613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 11884.00 IOPS, 92.84 MiB/s [2024-12-05T12:44:35.141Z] [2024-12-05 13:44:35.033386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.033414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 00:13:03.615 Latency(us) 00:13:03.615 [2024-12-05T12:44:35.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.615 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:03.615 Nvme1n1 : 5.01 
11885.43 92.85 0.00 0.00 10755.26 4830.25 23495.87 00:13:03.615 [2024-12-05T12:44:35.141Z] =================================================================================================================== 00:13:03.615 [2024-12-05T12:44:35.141Z] Total : 11885.43 92.85 0.00 0.00 10755.26 4830.25 23495.87 00:13:03.615 [2024-12-05 13:44:35.037893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.037917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.045937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.045979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.053923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.053944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.062017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.062061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.070048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.070096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.078073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.078129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.086080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.086126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:13:03.615 [2024-12-05 13:44:35.094121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.094170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.102134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.102181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.110146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.110190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.118174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.118222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.126195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.126240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.615 [2024-12-05 13:44:35.138273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.615 [2024-12-05 13:44:35.138326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.146252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.146297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.154269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.154316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.162297] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.162342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.170311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.170356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.178327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.178370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.186297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.186322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.194306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.194327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.202330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.202351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.210350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.210369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.218407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.218457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.226472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.226515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.234499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.234543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.242460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.242480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.250478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.250498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 [2024-12-05 13:44:35.258506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.873 [2024-12-05 13:44:35.258527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2164484) - No such process 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2164484 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.873 delay0 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.873 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:03.873 [2024-12-05 13:44:35.379208] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:10.445 [2024-12-05 13:44:41.477272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c400 is same with the state(6) to be set 00:13:10.445 [2024-12-05 13:44:41.477327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c400 is same with the state(6) to be set 00:13:10.445 Initializing NVMe Controllers 00:13:10.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:10.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:10.445 Initialization complete. Launching workers. 
00:13:10.445 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 53 00:13:10.445 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 340, failed to submit 33 00:13:10.445 success 155, unsuccessful 185, failed 0 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.445 rmmod nvme_tcp 00:13:10.445 rmmod nvme_fabrics 00:13:10.445 rmmod nvme_keyring 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2163144 ']' 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2163144 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2163144 ']' 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2163144 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2163144 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2163144' 00:13:10.445 killing process with pid 2163144 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2163144 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2163144 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.445 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.396 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:12.396 00:13:12.396 real 0m27.820s 00:13:12.396 user 0m40.910s 00:13:12.396 sys 0m8.227s 00:13:12.397 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.397 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:12.397 ************************************ 00:13:12.397 END TEST nvmf_zcopy 00:13:12.397 ************************************ 00:13:12.397 13:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:12.397 13:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.397 13:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.397 13:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:12.656 ************************************ 00:13:12.656 START TEST nvmf_nmic 00:13:12.656 ************************************ 00:13:12.656 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:12.656 * Looking for test storage... 
00:13:12.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.656 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:12.656 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:13:12.656 13:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.656 13:44:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:12.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.656 --rc genhtml_branch_coverage=1 00:13:12.656 --rc genhtml_function_coverage=1 00:13:12.656 --rc genhtml_legend=1 00:13:12.656 --rc geninfo_all_blocks=1 00:13:12.656 --rc geninfo_unexecuted_blocks=1 
00:13:12.656 00:13:12.656 ' 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:12.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.656 --rc genhtml_branch_coverage=1 00:13:12.656 --rc genhtml_function_coverage=1 00:13:12.656 --rc genhtml_legend=1 00:13:12.656 --rc geninfo_all_blocks=1 00:13:12.656 --rc geninfo_unexecuted_blocks=1 00:13:12.656 00:13:12.656 ' 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:12.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.656 --rc genhtml_branch_coverage=1 00:13:12.656 --rc genhtml_function_coverage=1 00:13:12.656 --rc genhtml_legend=1 00:13:12.656 --rc geninfo_all_blocks=1 00:13:12.656 --rc geninfo_unexecuted_blocks=1 00:13:12.656 00:13:12.656 ' 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:12.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.656 --rc genhtml_branch_coverage=1 00:13:12.656 --rc genhtml_function_coverage=1 00:13:12.656 --rc genhtml_legend=1 00:13:12.656 --rc geninfo_all_blocks=1 00:13:12.656 --rc geninfo_unexecuted_blocks=1 00:13:12.656 00:13:12.656 ' 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.656 13:44:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.656 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:12.657 
13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:12.657 13:44:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.197 13:44:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:15.197 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:15.197 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:15.197 Found net devices under 0000:09:00.0: cvl_0_0 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:15.197 Found net devices under 0000:09:00.1: cvl_0_1 00:13:15.197 
13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.197 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:13:15.198 00:13:15.198 --- 10.0.0.2 ping statistics --- 00:13:15.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.198 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:13:15.198 00:13:15.198 --- 10.0.0.1 ping statistics --- 00:13:15.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.198 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2167848 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
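The netns plumbing traced above (nvmf/common.sh lines 250-291) condenses to a short sequence: put the target NIC in its own namespace, address both sides, and open the NVMe/TCP port. A minimal sketch, not SPDK's actual helper — interface names and addresses are taken from the log, and the `run` dry-run wrapper is an addition here so the privileged commands can be inspected without root:

```shell
#!/bin/sh
# Sketch of the two-interface test topology the log builds.
# DRY_RUN=1 (the default) only prints the privileged commands.
setup_nvmf_netns() {
  ns=cvl_0_0_ns_spdk
  run() { if [ "${DRY_RUN:-1}" -eq 1 ]; then echo "$@"; else "$@"; fi; }
  run ip netns add "$ns"                    # target side lives in its own netns
  run ip link set cvl_0_0 netns "$ns"       # move the target NIC in
  run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator stays in the root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
setup_nvmf_netns
```

With the namespaces in place, the target app itself is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why both ping directions are verified first.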
00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2167848 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2167848 ']' 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.198 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.198 [2024-12-05 13:44:46.495122] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:13:15.198 [2024-12-05 13:44:46.495198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.198 [2024-12-05 13:44:46.570094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.198 [2024-12-05 13:44:46.632160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.198 [2024-12-05 13:44:46.632215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:15.198 [2024-12-05 13:44:46.632246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.198 [2024-12-05 13:44:46.632258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.198 [2024-12-05 13:44:46.632269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.198 [2024-12-05 13:44:46.633973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.198 [2024-12-05 13:44:46.634031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.198 [2024-12-05 13:44:46.634097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.198 [2024-12-05 13:44:46.634101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 [2024-12-05 13:44:46.793828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.457 
13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 Malloc0 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 [2024-12-05 13:44:46.867875] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:15.457 test case1: single bdev can't be used in multiple subsystems 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.457 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.458 [2024-12-05 13:44:46.891719] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:15.458 [2024-12-05 
13:44:46.891749] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:15.458 [2024-12-05 13:44:46.891779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.458 request: 00:13:15.458 { 00:13:15.458 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:15.458 "namespace": { 00:13:15.458 "bdev_name": "Malloc0", 00:13:15.458 "no_auto_visible": false, 00:13:15.458 "hide_metadata": false 00:13:15.458 }, 00:13:15.458 "method": "nvmf_subsystem_add_ns", 00:13:15.458 "req_id": 1 00:13:15.458 } 00:13:15.458 Got JSON-RPC error response 00:13:15.458 response: 00:13:15.458 { 00:13:15.458 "code": -32602, 00:13:15.458 "message": "Invalid parameters" 00:13:15.458 } 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:15.458 Adding namespace failed - expected result. 
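Test case1 above is a standard expected-failure check: the RPC is supposed to fail (Malloc0 is already claimed exclusive_write by cnode1), so the script records the exit status and treats success as the error. A minimal sketch of that pattern, with `false` as a hypothetical stand-in for the failing `rpc_cmd nvmf_subsystem_add_ns` call:

```shell
#!/bin/sh
# Expected-failure pattern: run a command that SHOULD fail,
# capture its status, and report unexpected success as the bug.
expect_failure() {
  status=0
  "$@" || status=1
  if [ "$status" -eq 0 ]; then
    echo "unexpected success"
    return 1
  fi
  echo "Adding namespace failed - expected result."
}
expect_failure false   # `false` stands in for the add_ns RPC
```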
00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:15.458 test case2: host connect to nvmf target in multiple paths 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.458 [2024-12-05 13:44:46.903856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.458 13:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.028 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:16.594 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.594 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.594 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.594 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.594 13:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:13:19.132 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.132 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.132 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.132 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:19.132 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.132 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:19.132 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:19.132 [global] 00:13:19.132 thread=1 00:13:19.132 invalidate=1 00:13:19.132 rw=write 00:13:19.132 time_based=1 00:13:19.132 runtime=1 00:13:19.132 ioengine=libaio 00:13:19.132 direct=1 00:13:19.132 bs=4096 00:13:19.132 iodepth=1 00:13:19.132 norandommap=0 00:13:19.132 numjobs=1 00:13:19.132 00:13:19.132 verify_dump=1 00:13:19.132 verify_backlog=512 00:13:19.132 verify_state_save=0 00:13:19.132 do_verify=1 00:13:19.132 verify=crc32c-intel 00:13:19.132 [job0] 00:13:19.132 filename=/dev/nvme0n1 00:13:19.132 Could not set queue depth (nvme0n1) 00:13:19.132 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:19.132 fio-3.35 00:13:19.132 Starting 1 thread 00:13:20.076 00:13:20.076 job0: (groupid=0, jobs=1): err= 0: pid=2168422: Thu Dec 5 13:44:51 2024 00:13:20.076 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:13:20.076 slat (nsec): min=9985, max=35066, avg=30118.86, stdev=7688.18 00:13:20.076 clat (usec): min=40898, max=41951, avg=41018.13, stdev=220.81 00:13:20.076 lat (usec): min=40931, max=41973, 
avg=41048.25, stdev=217.89 00:13:20.076 clat percentiles (usec): 00:13:20.076 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:20.076 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:20.076 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:20.076 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:20.076 | 99.99th=[42206] 00:13:20.076 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:13:20.076 slat (usec): min=7, max=24919, avg=64.14, stdev=1100.62 00:13:20.076 clat (usec): min=132, max=342, avg=196.28, stdev=36.20 00:13:20.076 lat (usec): min=142, max=25105, avg=260.42, stdev=1100.83 00:13:20.076 clat percentiles (usec): 00:13:20.076 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 161], 00:13:20.076 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 190], 60.00th=[ 210], 00:13:20.076 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 251], 00:13:20.076 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 343], 99.95th=[ 343], 00:13:20.076 | 99.99th=[ 343] 00:13:20.076 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:20.076 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:20.076 lat (usec) : 250=90.81%, 500=5.25% 00:13:20.076 lat (msec) : 50=3.94% 00:13:20.076 cpu : usr=0.50%, sys=1.10%, ctx=537, majf=0, minf=1 00:13:20.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.076 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.076 00:13:20.076 Run status group 0 (all jobs): 00:13:20.076 READ: bw=83.9KiB/s (85.9kB/s), 83.9KiB/s-83.9KiB/s (85.9kB/s-85.9kB/s), io=84.0KiB (86.0kB), 
run=1001-1001msec 00:13:20.076 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:13:20.076 00:13:20.076 Disk stats (read/write): 00:13:20.076 nvme0n1: ios=44/512, merge=0/0, ticks=1724/97, in_queue=1821, util=98.30% 00:13:20.076 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.336 rmmod nvme_tcp 00:13:20.336 rmmod nvme_fabrics 00:13:20.336 rmmod nvme_keyring 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2167848 ']' 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2167848 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2167848 ']' 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2167848 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2167848 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2167848' 00:13:20.336 killing process with pid 2167848 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2167848 00:13:20.336 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2167848 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.597 13:44:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.504 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.504 00:13:22.504 real 0m10.090s 00:13:22.504 user 0m22.345s 00:13:22.504 sys 0m2.525s 00:13:22.504 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.504 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:22.504 ************************************ 00:13:22.504 END TEST nvmf_nmic 00:13:22.504 ************************************ 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:22.764 ************************************ 00:13:22.764 START TEST nvmf_fio_target 00:13:22.764 ************************************ 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:22.764 * Looking for test storage... 00:13:22.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.764 13:44:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.764 --rc genhtml_branch_coverage=1 00:13:22.764 --rc genhtml_function_coverage=1 00:13:22.764 --rc genhtml_legend=1 00:13:22.764 --rc geninfo_all_blocks=1 00:13:22.764 --rc geninfo_unexecuted_blocks=1 00:13:22.764 00:13:22.764 ' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.764 --rc genhtml_branch_coverage=1 00:13:22.764 --rc genhtml_function_coverage=1 00:13:22.764 --rc genhtml_legend=1 00:13:22.764 --rc geninfo_all_blocks=1 00:13:22.764 --rc geninfo_unexecuted_blocks=1 00:13:22.764 00:13:22.764 ' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.764 --rc genhtml_branch_coverage=1 00:13:22.764 --rc genhtml_function_coverage=1 00:13:22.764 --rc genhtml_legend=1 00:13:22.764 --rc geninfo_all_blocks=1 00:13:22.764 --rc geninfo_unexecuted_blocks=1 00:13:22.764 00:13:22.764 ' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.764 --rc 
genhtml_branch_coverage=1 00:13:22.764 --rc genhtml_function_coverage=1 00:13:22.764 --rc genhtml_legend=1 00:13:22.764 --rc geninfo_all_blocks=1 00:13:22.764 --rc geninfo_unexecuted_blocks=1 00:13:22.764 00:13:22.764 ' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.764 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.765 13:44:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:25.296 13:44:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:25.296 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:25.296 13:44:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:25.296 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:25.296 Found net devices under 0000:09:00.0: cvl_0_0 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:25.296 Found net devices under 0000:09:00.1: cvl_0_1 
00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:25.296 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:25.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:25.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:13:25.297 00:13:25.297 --- 10.0.0.2 ping statistics --- 00:13:25.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.297 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:13:25.297 00:13:25.297 --- 10.0.0.1 ping statistics --- 00:13:25.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.297 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
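The `nvmf_tcp_init` sequence traced above (flush addresses, create a namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping) can be condensed into a dry-run sketch. The interface names `cvl_0_0`/`cvl_0_1` are the ones discovered on this particular E810 host and will differ elsewhere; by default the script only prints the commands (pass `sudo` to execute for real).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test-bed setup performed by nvmf/common.sh above.
# Commands are printed rather than executed; call `setup_netns sudo` to run them.
setup_netns() {
  local run=${1:-echo}
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

  # Start from a clean slate on both NICs.
  $run ip -4 addr flush "$target_if"
  $run ip -4 addr flush "$initiator_if"

  # Isolate the target NIC in its own network namespace.
  $run ip netns add "$ns"
  $run ip link set "$target_if" netns "$ns"

  # Initiator stays in the host namespace at .1, target gets .2 inside the namespace.
  $run ip addr add 10.0.0.1/24 dev "$initiator_if"
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

  # Bring everything up, including loopback inside the namespace.
  $run ip link set "$initiator_if" up
  $run ip netns exec "$ns" ip link set "$target_if" up
  $run ip netns exec "$ns" ip link set lo up

  # Allow NVMe/TCP traffic (port 4420) in from the initiator side.
  $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_netns   # default: just print the command sequence
```

Because `nvmf_tgt` is later launched with `ip netns exec cvl_0_0_ns_spdk`, the target listens on 10.0.0.2 inside the namespace while fio on the host connects from 10.0.0.1, exercising a real NIC-to-NIC TCP path on one machine.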
00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2170509 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2170509 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2170509 ']' 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.297 [2024-12-05 13:44:56.567607] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:13:25.297 [2024-12-05 13:44:56.567688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.297 [2024-12-05 13:44:56.638745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.297 [2024-12-05 13:44:56.695749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.297 [2024-12-05 13:44:56.695800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.297 [2024-12-05 13:44:56.695828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.297 [2024-12-05 13:44:56.695840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.297 [2024-12-05 13:44:56.695850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:25.297 [2024-12-05 13:44:56.697429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.297 [2024-12-05 13:44:56.697490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.297 [2024-12-05 13:44:56.697543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.297 [2024-12-05 13:44:56.697546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.297 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.555 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.555 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:25.812 [2024-12-05 13:44:57.144453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.812 13:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:26.070 13:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:26.070 13:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:26.327 13:44:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:26.327 13:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:26.585 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:26.585 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:27.154 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:27.154 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:27.154 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:27.724 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:27.724 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:27.724 13:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:27.984 13:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.243 13:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:28.243 13:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:13:28.501 13:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.759 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:28.759 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:29.017 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:29.017 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.275 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.533 [2024-12-05 13:45:00.916469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.533 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:29.790 13:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:30.048 13:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
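The `fio.sh` RPC sequence traced above builds the target bottom-up: transport, seven malloc bdevs, a raid0 and a concat volume over four of them, then a subsystem with four namespaces and a TCP listener, which the host-side `nvme connect` finally attaches to. A condensed dry-run sketch (with `rpc` as a printing stand-in for `scripts/rpc.py` against the running `nvmf_tgt`):

```shell
#!/usr/bin/env bash
# Dry-run summary of the rpc.py calls in target/fio.sh above.
# `rpc` only echoes here; in the real run each call goes to scripts/rpc.py.
rpc() { echo rpc.py "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB / 512 B-block malloc bdevs: Malloc0..Malloc6.
for i in 0 1 2 3 4 5 6; do
  rpc bdev_malloc_create 64 512
done

# Malloc2+Malloc3 become raid0; Malloc4..Malloc6 become concat0.
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem exposing four namespaces (hence the four nvme0n1..n4 fio jobs).
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The four namespaces are why `waitforserial SPDKISFASTANDAWESOME 4` expects exactly four block devices before the fio workloads start.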
00:13:30.985 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:30.985 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.985 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.985 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:13:30.985 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:13:30.985 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.892 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.892 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.892 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.892 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:13:32.892 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.892 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:13:32.892 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:32.892 [global] 00:13:32.892 thread=1 00:13:32.892 invalidate=1 00:13:32.892 rw=write 00:13:32.892 time_based=1 00:13:32.892 runtime=1 00:13:32.892 ioengine=libaio 00:13:32.892 direct=1 00:13:32.892 bs=4096 00:13:32.892 iodepth=1 00:13:32.892 norandommap=0 00:13:32.892 numjobs=1 00:13:32.892 00:13:32.892 
verify_dump=1 00:13:32.892 verify_backlog=512 00:13:32.892 verify_state_save=0 00:13:32.892 do_verify=1 00:13:32.892 verify=crc32c-intel 00:13:32.892 [job0] 00:13:32.892 filename=/dev/nvme0n1 00:13:32.892 [job1] 00:13:32.892 filename=/dev/nvme0n2 00:13:32.892 [job2] 00:13:32.892 filename=/dev/nvme0n3 00:13:32.892 [job3] 00:13:32.892 filename=/dev/nvme0n4 00:13:32.892 Could not set queue depth (nvme0n1) 00:13:32.892 Could not set queue depth (nvme0n2) 00:13:32.892 Could not set queue depth (nvme0n3) 00:13:32.892 Could not set queue depth (nvme0n4) 00:13:33.151 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.151 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.151 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.151 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.151 fio-3.35 00:13:33.151 Starting 4 threads 00:13:34.542 00:13:34.542 job0: (groupid=0, jobs=1): err= 0: pid=2171712: Thu Dec 5 13:45:05 2024 00:13:34.542 read: IOPS=23, BW=92.1KiB/s (94.3kB/s)(96.0KiB/1042msec) 00:13:34.542 slat (nsec): min=9776, max=36596, avg=16242.83, stdev=6436.55 00:13:34.542 clat (usec): min=309, max=41976, avg=39344.40, stdev=8319.60 00:13:34.542 lat (usec): min=323, max=41990, avg=39360.65, stdev=8320.36 00:13:34.542 clat percentiles (usec): 00:13:34.542 | 1.00th=[ 310], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:34.542 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:34.542 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:34.542 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:34.542 | 99.99th=[42206] 00:13:34.542 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:13:34.542 slat (nsec): min=8397, max=45657, 
avg=12395.77, stdev=4215.97 00:13:34.542 clat (usec): min=143, max=258, avg=173.59, stdev=15.03 00:13:34.542 lat (usec): min=152, max=267, avg=185.98, stdev=16.19 00:13:34.542 clat percentiles (usec): 00:13:34.542 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:13:34.542 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:13:34.542 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:13:34.542 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 260], 99.95th=[ 260], 00:13:34.542 | 99.99th=[ 260] 00:13:34.542 bw ( KiB/s): min= 4096, max= 4096, per=21.67%, avg=4096.00, stdev= 0.00, samples=1 00:13:34.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:34.542 lat (usec) : 250=95.15%, 500=0.56% 00:13:34.542 lat (msec) : 50=4.29% 00:13:34.542 cpu : usr=0.38%, sys=0.77%, ctx=537, majf=0, minf=1 00:13:34.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.542 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.542 job1: (groupid=0, jobs=1): err= 0: pid=2171713: Thu Dec 5 13:45:05 2024 00:13:34.542 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:13:34.542 slat (nsec): min=8298, max=17497, avg=13908.55, stdev=1851.83 00:13:34.542 clat (usec): min=40327, max=41102, avg=40953.98, stdev=150.15 00:13:34.542 lat (usec): min=40335, max=41116, avg=40967.89, stdev=151.41 00:13:34.542 clat percentiles (usec): 00:13:34.542 | 1.00th=[40109], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:34.542 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:34.542 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:34.542 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:13:34.542 | 99.99th=[41157] 00:13:34.542 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:13:34.542 slat (nsec): min=6350, max=47203, avg=11652.27, stdev=6089.59 00:13:34.542 clat (usec): min=133, max=1028, avg=236.30, stdev=93.76 00:13:34.542 lat (usec): min=142, max=1040, avg=247.96, stdev=95.43 00:13:34.542 clat percentiles (usec): 00:13:34.542 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 169], 00:13:34.542 | 30.00th=[ 186], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 243], 00:13:34.542 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 318], 95.00th=[ 343], 00:13:34.542 | 99.00th=[ 807], 99.50th=[ 898], 99.90th=[ 1029], 99.95th=[ 1029], 00:13:34.542 | 99.99th=[ 1029] 00:13:34.542 bw ( KiB/s): min= 4096, max= 4096, per=21.67%, avg=4096.00, stdev= 0.00, samples=1 00:13:34.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:34.542 lat (usec) : 250=74.34%, 500=20.04%, 750=0.37%, 1000=0.94% 00:13:34.542 lat (msec) : 2=0.19%, 50=4.12% 00:13:34.542 cpu : usr=0.29%, sys=0.58%, ctx=535, majf=0, minf=1 00:13:34.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.542 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.542 job2: (groupid=0, jobs=1): err= 0: pid=2171714: Thu Dec 5 13:45:05 2024 00:13:34.542 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:34.542 slat (nsec): min=4683, max=64734, avg=12585.73, stdev=7603.42 00:13:34.542 clat (usec): min=179, max=677, avg=272.53, stdev=99.00 00:13:34.542 lat (usec): min=185, max=712, avg=285.11, stdev=102.46 00:13:34.542 clat percentiles (usec): 00:13:34.543 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:13:34.543 | 30.00th=[ 
208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 227], 00:13:34.543 | 70.00th=[ 293], 80.00th=[ 375], 90.00th=[ 433], 95.00th=[ 478], 00:13:34.543 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 668], 99.95th=[ 668], 00:13:34.543 | 99.99th=[ 676] 00:13:34.543 write: IOPS=2055, BW=8224KiB/s (8421kB/s)(8232KiB/1001msec); 0 zone resets 00:13:34.543 slat (nsec): min=6417, max=37434, avg=12512.65, stdev=5459.00 00:13:34.543 clat (usec): min=132, max=526, avg=183.01, stdev=38.08 00:13:34.543 lat (usec): min=139, max=536, avg=195.52, stdev=38.47 00:13:34.543 clat percentiles (usec): 00:13:34.543 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155], 00:13:34.543 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:13:34.543 | 70.00th=[ 190], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 255], 00:13:34.543 | 99.00th=[ 285], 99.50th=[ 314], 99.90th=[ 441], 99.95th=[ 490], 00:13:34.543 | 99.99th=[ 529] 00:13:34.543 bw ( KiB/s): min= 9512, max= 9512, per=50.33%, avg=9512.00, stdev= 0.00, samples=1 00:13:34.543 iops : min= 2378, max= 2378, avg=2378.00, stdev= 0.00, samples=1 00:13:34.543 lat (usec) : 250=79.08%, 500=19.12%, 750=1.80% 00:13:34.543 cpu : usr=3.00%, sys=4.90%, ctx=4109, majf=0, minf=1 00:13:34.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.543 issued rwts: total=2048,2058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.543 job3: (groupid=0, jobs=1): err= 0: pid=2171715: Thu Dec 5 13:45:05 2024 00:13:34.543 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:34.543 slat (nsec): min=5082, max=45008, avg=14673.53, stdev=7434.84 00:13:34.543 clat (usec): min=202, max=644, avg=317.97, stdev=68.23 00:13:34.543 lat (usec): min=219, max=673, avg=332.64, stdev=70.04 
00:13:34.543 clat percentiles (usec): 00:13:34.543 | 1.00th=[ 231], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:13:34.543 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 306], 00:13:34.543 | 70.00th=[ 334], 80.00th=[ 367], 90.00th=[ 416], 95.00th=[ 469], 00:13:34.543 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 627], 99.95th=[ 644], 00:13:34.543 | 99.99th=[ 644] 00:13:34.543 write: IOPS=1839, BW=7357KiB/s (7533kB/s)(7364KiB/1001msec); 0 zone resets 00:13:34.543 slat (usec): min=6, max=39761, avg=37.01, stdev=926.37 00:13:34.543 clat (usec): min=138, max=916, avg=221.29, stdev=57.54 00:13:34.543 lat (usec): min=147, max=40112, avg=258.29, stdev=931.24 00:13:34.543 clat percentiles (usec): 00:13:34.543 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 167], 20.00th=[ 186], 00:13:34.543 | 30.00th=[ 198], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:13:34.543 | 70.00th=[ 231], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 318], 00:13:34.543 | 99.00th=[ 375], 99.50th=[ 404], 99.90th=[ 914], 99.95th=[ 914], 00:13:34.543 | 99.99th=[ 914] 00:13:34.543 bw ( KiB/s): min= 8192, max= 8192, per=43.35%, avg=8192.00, stdev= 0.00, samples=1 00:13:34.543 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:34.543 lat (usec) : 250=46.96%, 500=51.76%, 750=1.13%, 1000=0.15% 00:13:34.543 cpu : usr=3.30%, sys=6.20%, ctx=3379, majf=0, minf=1 00:13:34.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.543 issued rwts: total=1536,1841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.543 00:13:34.543 Run status group 0 (all jobs): 00:13:34.543 READ: bw=13.6MiB/s (14.3MB/s), 85.4KiB/s-8184KiB/s (87.5kB/s-8380kB/s), io=14.2MiB (14.9MB), run=1001-1042msec 00:13:34.543 WRITE: bw=18.5MiB/s 
(19.4MB/s), 1965KiB/s-8224KiB/s (2013kB/s-8421kB/s), io=19.2MiB (20.2MB), run=1001-1042msec 00:13:34.543 00:13:34.543 Disk stats (read/write): 00:13:34.543 nvme0n1: ios=61/512, merge=0/0, ticks=766/89, in_queue=855, util=87.17% 00:13:34.543 nvme0n2: ios=59/512, merge=0/0, ticks=763/117, in_queue=880, util=91.26% 00:13:34.543 nvme0n3: ios=1560/1864, merge=0/0, ticks=1331/344, in_queue=1675, util=93.54% 00:13:34.543 nvme0n4: ios=1314/1536, merge=0/0, ticks=769/333, in_queue=1102, util=95.49% 00:13:34.543 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:34.543 [global] 00:13:34.543 thread=1 00:13:34.543 invalidate=1 00:13:34.543 rw=randwrite 00:13:34.543 time_based=1 00:13:34.543 runtime=1 00:13:34.543 ioengine=libaio 00:13:34.543 direct=1 00:13:34.543 bs=4096 00:13:34.543 iodepth=1 00:13:34.543 norandommap=0 00:13:34.543 numjobs=1 00:13:34.543 00:13:34.543 verify_dump=1 00:13:34.543 verify_backlog=512 00:13:34.543 verify_state_save=0 00:13:34.543 do_verify=1 00:13:34.543 verify=crc32c-intel 00:13:34.543 [job0] 00:13:34.543 filename=/dev/nvme0n1 00:13:34.543 [job1] 00:13:34.543 filename=/dev/nvme0n2 00:13:34.543 [job2] 00:13:34.543 filename=/dev/nvme0n3 00:13:34.543 [job3] 00:13:34.543 filename=/dev/nvme0n4 00:13:34.543 Could not set queue depth (nvme0n1) 00:13:34.543 Could not set queue depth (nvme0n2) 00:13:34.543 Could not set queue depth (nvme0n3) 00:13:34.543 Could not set queue depth (nvme0n4) 00:13:34.543 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.543 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.543 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.543 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.543 fio-3.35 00:13:34.543 Starting 4 threads 00:13:35.951 00:13:35.951 job0: (groupid=0, jobs=1): err= 0: pid=2172057: Thu Dec 5 13:45:07 2024 00:13:35.951 read: IOPS=1317, BW=5271KiB/s (5397kB/s)(5276KiB/1001msec) 00:13:35.951 slat (nsec): min=5615, max=65874, avg=13308.92, stdev=6558.01 00:13:35.951 clat (usec): min=176, max=41050, avg=460.42, stdev=2740.71 00:13:35.951 lat (usec): min=184, max=41064, avg=473.73, stdev=2740.84 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:13:35.951 | 30.00th=[ 208], 40.00th=[ 223], 50.00th=[ 262], 60.00th=[ 297], 00:13:35.951 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 383], 95.00th=[ 445], 00:13:35.951 | 99.00th=[ 578], 99.50th=[ 758], 99.90th=[41157], 99.95th=[41157], 00:13:35.951 | 99.99th=[41157] 00:13:35.951 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:35.951 slat (nsec): min=8618, max=81616, avg=18989.27, stdev=9107.23 00:13:35.951 clat (usec): min=121, max=485, avg=216.68, stdev=57.19 00:13:35.951 lat (usec): min=130, max=516, avg=235.67, stdev=61.61 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 139], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 178], 00:13:35.951 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:13:35.951 | 70.00th=[ 221], 80.00th=[ 255], 90.00th=[ 293], 95.00th=[ 343], 00:13:35.951 | 99.00th=[ 416], 99.50th=[ 441], 99.90th=[ 478], 99.95th=[ 486], 00:13:35.951 | 99.99th=[ 486] 00:13:35.951 bw ( KiB/s): min= 8192, max= 8192, per=51.60%, avg=8192.00, stdev= 0.00, samples=1 00:13:35.951 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:35.951 lat (usec) : 250=64.27%, 500=34.22%, 750=1.26%, 1000=0.04% 00:13:35.951 lat (msec) : 50=0.21% 00:13:35.951 cpu : usr=3.70%, sys=5.70%, ctx=2857, majf=0, minf=1 00:13:35.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:13:35.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 issued rwts: total=1319,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.951 job1: (groupid=0, jobs=1): err= 0: pid=2172058: Thu Dec 5 13:45:07 2024 00:13:35.951 read: IOPS=64, BW=260KiB/s (266kB/s)(268KiB/1032msec) 00:13:35.951 slat (nsec): min=8874, max=34524, avg=14471.49, stdev=7777.97 00:13:35.951 clat (usec): min=242, max=42032, avg=13843.01, stdev=19494.83 00:13:35.951 lat (usec): min=254, max=42050, avg=13857.48, stdev=19498.32 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 243], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 310], 00:13:35.951 | 30.00th=[ 314], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 338], 00:13:35.951 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:35.951 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:35.951 | 99.99th=[42206] 00:13:35.951 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:13:35.951 slat (nsec): min=6584, max=34093, avg=13779.91, stdev=5318.87 00:13:35.951 clat (usec): min=148, max=356, avg=183.13, stdev=19.71 00:13:35.951 lat (usec): min=156, max=365, avg=196.91, stdev=20.43 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:13:35.951 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 186], 00:13:35.951 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 217], 00:13:35.951 | 99.00th=[ 233], 99.50th=[ 269], 99.90th=[ 359], 99.95th=[ 359], 00:13:35.951 | 99.99th=[ 359] 00:13:35.951 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:35.951 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:35.951 lat (usec) : 250=87.91%, 500=8.29% 00:13:35.951 
lat (msec) : 50=3.80% 00:13:35.951 cpu : usr=0.39%, sys=0.68%, ctx=580, majf=0, minf=1 00:13:35.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 issued rwts: total=67,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.951 job2: (groupid=0, jobs=1): err= 0: pid=2172059: Thu Dec 5 13:45:07 2024 00:13:35.951 read: IOPS=475, BW=1902KiB/s (1948kB/s)(1940KiB/1020msec) 00:13:35.951 slat (nsec): min=8636, max=36235, avg=10454.72, stdev=3742.80 00:13:35.951 clat (usec): min=206, max=41084, avg=1774.89, stdev=7699.39 00:13:35.951 lat (usec): min=215, max=41101, avg=1785.34, stdev=7701.57 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:13:35.951 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 262], 00:13:35.951 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 363], 00:13:35.951 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:35.951 | 99.99th=[41157] 00:13:35.951 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:35.951 slat (nsec): min=7919, max=80860, avg=24988.39, stdev=12161.21 00:13:35.951 clat (usec): min=152, max=447, avg=265.40, stdev=63.42 00:13:35.951 lat (usec): min=179, max=466, avg=290.39, stdev=62.58 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 184], 20.00th=[ 208], 00:13:35.951 | 30.00th=[ 225], 40.00th=[ 241], 50.00th=[ 260], 60.00th=[ 277], 00:13:35.951 | 70.00th=[ 297], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 367], 00:13:35.951 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 449], 99.95th=[ 449], 00:13:35.951 | 99.99th=[ 449] 00:13:35.951 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, 
avg=4096.00, stdev= 0.00, samples=1 00:13:35.951 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:35.951 lat (usec) : 250=48.45%, 500=49.75% 00:13:35.951 lat (msec) : 50=1.81% 00:13:35.951 cpu : usr=1.18%, sys=1.67%, ctx=998, majf=0, minf=1 00:13:35.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 issued rwts: total=485,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.951 job3: (groupid=0, jobs=1): err= 0: pid=2172060: Thu Dec 5 13:45:07 2024 00:13:35.951 read: IOPS=1185, BW=4743KiB/s (4857kB/s)(4748KiB/1001msec) 00:13:35.951 slat (nsec): min=7010, max=60687, avg=13509.09, stdev=6139.67 00:13:35.951 clat (usec): min=203, max=41317, avg=504.50, stdev=2888.54 00:13:35.951 lat (usec): min=212, max=41332, avg=518.01, stdev=2888.58 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 249], 00:13:35.951 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:13:35.951 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 437], 00:13:35.951 | 99.00th=[ 586], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:13:35.951 | 99.99th=[41157] 00:13:35.951 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:35.951 slat (nsec): min=6645, max=78026, avg=18389.42, stdev=8589.30 00:13:35.951 clat (usec): min=143, max=504, avg=224.04, stdev=64.81 00:13:35.951 lat (usec): min=152, max=535, avg=242.43, stdev=69.44 00:13:35.951 clat percentiles (usec): 00:13:35.951 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:13:35.951 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 208], 00:13:35.951 | 70.00th=[ 221], 80.00th=[ 262], 90.00th=[ 326], 95.00th=[ 379], 
00:13:35.951 | 99.00th=[ 445], 99.50th=[ 474], 99.90th=[ 502], 99.95th=[ 506], 00:13:35.951 | 99.99th=[ 506] 00:13:35.951 bw ( KiB/s): min= 8192, max= 8192, per=51.60%, avg=8192.00, stdev= 0.00, samples=1 00:13:35.951 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:35.951 lat (usec) : 250=52.99%, 500=45.94%, 750=0.84% 00:13:35.951 lat (msec) : 50=0.22% 00:13:35.951 cpu : usr=2.80%, sys=6.40%, ctx=2723, majf=0, minf=2 00:13:35.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.951 issued rwts: total=1187,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.951 00:13:35.951 Run status group 0 (all jobs): 00:13:35.951 READ: bw=11.6MiB/s (12.1MB/s), 260KiB/s-5271KiB/s (266kB/s-5397kB/s), io=11.9MiB (12.5MB), run=1001-1032msec 00:13:35.951 WRITE: bw=15.5MiB/s (16.3MB/s), 1984KiB/s-6138KiB/s (2032kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1032msec 00:13:35.951 00:13:35.951 Disk stats (read/write): 00:13:35.951 nvme0n1: ios=1070/1241, merge=0/0, ticks=1501/247, in_queue=1748, util=99.20% 00:13:35.951 nvme0n2: ios=67/512, merge=0/0, ticks=1457/87, in_queue=1544, util=99.19% 00:13:35.951 nvme0n3: ios=379/512, merge=0/0, ticks=1112/124, in_queue=1236, util=97.71% 00:13:35.951 nvme0n4: ios=1072/1067, merge=0/0, ticks=564/226, in_queue=790, util=90.65% 00:13:35.951 13:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:35.951 [global] 00:13:35.951 thread=1 00:13:35.951 invalidate=1 00:13:35.951 rw=write 00:13:35.951 time_based=1 00:13:35.951 runtime=1 00:13:35.951 ioengine=libaio 00:13:35.951 direct=1 00:13:35.951 bs=4096 00:13:35.952 iodepth=128 
00:13:35.952 norandommap=0 00:13:35.952 numjobs=1 00:13:35.952 00:13:35.952 verify_dump=1 00:13:35.952 verify_backlog=512 00:13:35.952 verify_state_save=0 00:13:35.952 do_verify=1 00:13:35.952 verify=crc32c-intel 00:13:35.952 [job0] 00:13:35.952 filename=/dev/nvme0n1 00:13:35.952 [job1] 00:13:35.952 filename=/dev/nvme0n2 00:13:35.952 [job2] 00:13:35.952 filename=/dev/nvme0n3 00:13:35.952 [job3] 00:13:35.952 filename=/dev/nvme0n4 00:13:35.952 Could not set queue depth (nvme0n1) 00:13:35.952 Could not set queue depth (nvme0n2) 00:13:35.952 Could not set queue depth (nvme0n3) 00:13:35.952 Could not set queue depth (nvme0n4) 00:13:35.952 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.952 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.952 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.952 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.952 fio-3.35 00:13:35.952 Starting 4 threads 00:13:37.329 00:13:37.329 job0: (groupid=0, jobs=1): err= 0: pid=2172450: Thu Dec 5 13:45:08 2024 00:13:37.329 read: IOPS=3570, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1008msec) 00:13:37.329 slat (usec): min=2, max=24513, avg=125.70, stdev=814.81 00:13:37.329 clat (usec): min=6823, max=63312, avg=15573.01, stdev=8353.68 00:13:37.329 lat (usec): min=6910, max=63328, avg=15698.71, stdev=8408.77 00:13:37.329 clat percentiles (usec): 00:13:37.329 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11076], 00:13:37.329 | 30.00th=[11600], 40.00th=[13173], 50.00th=[13829], 60.00th=[14615], 00:13:37.329 | 70.00th=[15926], 80.00th=[17695], 90.00th=[20317], 95.00th=[25035], 00:13:37.329 | 99.00th=[59507], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:13:37.329 | 99.99th=[63177] 00:13:37.329 write: IOPS=4063, BW=15.9MiB/s 
(16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:13:37.329 slat (usec): min=3, max=11963, avg=125.48, stdev=606.96 00:13:37.329 clat (usec): min=5590, max=58031, avg=17469.40, stdev=10259.86 00:13:37.329 lat (usec): min=5603, max=58053, avg=17594.88, stdev=10320.01 00:13:37.329 clat percentiles (usec): 00:13:37.329 | 1.00th=[ 7177], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11469], 00:13:37.329 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13435], 60.00th=[15401], 00:13:37.329 | 70.00th=[17957], 80.00th=[21365], 90.00th=[26346], 95.00th=[44827], 00:13:37.329 | 99.00th=[57934], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:13:37.329 | 99.99th=[57934] 00:13:37.329 bw ( KiB/s): min=12616, max=19256, per=23.91%, avg=15936.00, stdev=4695.19, samples=2 00:13:37.329 iops : min= 3154, max= 4814, avg=3984.00, stdev=1173.80, samples=2 00:13:37.329 lat (msec) : 10=8.08%, 20=74.88%, 50=13.70%, 100=3.34% 00:13:37.329 cpu : usr=2.98%, sys=8.54%, ctx=533, majf=0, minf=1 00:13:37.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:37.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.329 issued rwts: total=3599,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.329 job1: (groupid=0, jobs=1): err= 0: pid=2172451: Thu Dec 5 13:45:08 2024 00:13:37.329 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.2MiB/1014msec) 00:13:37.329 slat (usec): min=2, max=11319, avg=95.68, stdev=667.30 00:13:37.329 clat (usec): min=2770, max=29408, avg=12035.95, stdev=3179.31 00:13:37.329 lat (usec): min=2774, max=29414, avg=12131.63, stdev=3216.76 00:13:37.329 clat percentiles (usec): 00:13:37.329 | 1.00th=[ 3326], 5.00th=[ 6915], 10.00th=[ 9896], 20.00th=[10683], 00:13:37.329 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:13:37.329 | 70.00th=[11994], 
80.00th=[12780], 90.00th=[16057], 95.00th=[18744], 00:13:37.329 | 99.00th=[22414], 99.50th=[23200], 99.90th=[29492], 99.95th=[29492], 00:13:37.329 | 99.99th=[29492] 00:13:37.329 write: IOPS=5554, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1014msec); 0 zone resets 00:13:37.330 slat (usec): min=3, max=12420, avg=81.18, stdev=509.58 00:13:37.330 clat (usec): min=259, max=35524, avg=11741.53, stdev=4028.07 00:13:37.330 lat (usec): min=274, max=35533, avg=11822.71, stdev=4067.71 00:13:37.330 clat percentiles (usec): 00:13:37.330 | 1.00th=[ 3097], 5.00th=[ 5014], 10.00th=[ 7308], 20.00th=[ 9634], 00:13:37.330 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:13:37.330 | 70.00th=[12125], 80.00th=[13304], 90.00th=[15795], 95.00th=[19792], 00:13:37.330 | 99.00th=[25560], 99.50th=[31327], 99.90th=[35390], 99.95th=[35390], 00:13:37.330 | 99.99th=[35390] 00:13:37.330 bw ( KiB/s): min=21376, max=23024, per=33.31%, avg=22200.00, stdev=1165.31, samples=2 00:13:37.330 iops : min= 5344, max= 5756, avg=5550.00, stdev=291.33, samples=2 00:13:37.330 lat (usec) : 500=0.01%, 1000=0.14% 00:13:37.330 lat (msec) : 4=2.19%, 10=14.48%, 20=79.67%, 50=3.50% 00:13:37.330 cpu : usr=3.85%, sys=6.12%, ctx=545, majf=0, minf=1 00:13:37.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:37.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.330 issued rwts: total=5166,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.330 job2: (groupid=0, jobs=1): err= 0: pid=2172453: Thu Dec 5 13:45:08 2024 00:13:37.330 read: IOPS=3551, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:13:37.330 slat (usec): min=3, max=22450, avg=124.54, stdev=939.18 00:13:37.330 clat (usec): min=4759, max=57301, avg=16292.69, stdev=6985.76 00:13:37.330 lat (usec): min=4771, max=57317, avg=16417.23, 
stdev=7061.92 00:13:37.330 clat percentiles (usec): 00:13:37.330 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[12387], 20.00th=[12780], 00:13:37.330 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13435], 60.00th=[14484], 00:13:37.330 | 70.00th=[16057], 80.00th=[18220], 90.00th=[22152], 95.00th=[33817], 00:13:37.330 | 99.00th=[42730], 99.50th=[47449], 99.90th=[49021], 99.95th=[53216], 00:13:37.330 | 99.99th=[57410] 00:13:37.330 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:13:37.330 slat (usec): min=3, max=14981, avg=122.51, stdev=653.80 00:13:37.330 clat (usec): min=3053, max=54060, avg=16749.97, stdev=8058.99 00:13:37.330 lat (usec): min=3061, max=54076, avg=16872.48, stdev=8120.43 00:13:37.330 clat percentiles (usec): 00:13:37.330 | 1.00th=[ 5407], 5.00th=[ 8586], 10.00th=[11338], 20.00th=[12125], 00:13:37.330 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14353], 60.00th=[14877], 00:13:37.330 | 70.00th=[17695], 80.00th=[20317], 90.00th=[25297], 95.00th=[31065], 00:13:37.330 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:13:37.330 | 99.99th=[54264] 00:13:37.330 bw ( KiB/s): min=15496, max=16304, per=23.86%, avg=15900.00, stdev=571.34, samples=2 00:13:37.330 iops : min= 3874, max= 4076, avg=3975.00, stdev=142.84, samples=2 00:13:37.330 lat (msec) : 4=0.26%, 10=5.76%, 20=75.07%, 50=18.16%, 100=0.74% 00:13:37.330 cpu : usr=5.64%, sys=6.44%, ctx=445, majf=0, minf=1 00:13:37.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:37.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.330 issued rwts: total=3591,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.330 job3: (groupid=0, jobs=1): err= 0: pid=2172454: Thu Dec 5 13:45:08 2024 00:13:37.330 read: IOPS=2610, BW=10.2MiB/s 
(10.7MB/s)(10.2MiB/1005msec) 00:13:37.330 slat (usec): min=2, max=30030, avg=175.44, stdev=1287.38 00:13:37.330 clat (usec): min=1976, max=75891, avg=20186.21, stdev=15307.46 00:13:37.330 lat (usec): min=4382, max=75905, avg=20361.65, stdev=15389.12 00:13:37.330 clat percentiles (usec): 00:13:37.330 | 1.00th=[ 5342], 5.00th=[10421], 10.00th=[11076], 20.00th=[12518], 00:13:37.330 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14615], 60.00th=[15270], 00:13:37.330 | 70.00th=[15401], 80.00th=[23200], 90.00th=[43779], 95.00th=[60556], 00:13:37.330 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:13:37.330 | 99.99th=[76022] 00:13:37.330 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:13:37.330 slat (usec): min=3, max=12211, avg=165.99, stdev=937.82 00:13:37.330 clat (usec): min=321, max=75839, avg=23700.08, stdev=11810.28 00:13:37.330 lat (usec): min=568, max=75874, avg=23866.06, stdev=11857.39 00:13:37.330 clat percentiles (usec): 00:13:37.330 | 1.00th=[ 2180], 5.00th=[ 7832], 10.00th=[11731], 20.00th=[13304], 00:13:37.330 | 30.00th=[14484], 40.00th=[15401], 50.00th=[20317], 60.00th=[27657], 00:13:37.330 | 70.00th=[32900], 80.00th=[36439], 90.00th=[38536], 95.00th=[41681], 00:13:37.330 | 99.00th=[47973], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:13:37.330 | 99.99th=[76022] 00:13:37.330 bw ( KiB/s): min=11776, max=12312, per=18.07%, avg=12044.00, stdev=379.01, samples=2 00:13:37.330 iops : min= 2944, max= 3078, avg=3011.00, stdev=94.75, samples=2 00:13:37.330 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.09% 00:13:37.330 lat (msec) : 2=0.37%, 4=0.37%, 10=4.60%, 20=56.55%, 50=33.58% 00:13:37.330 lat (msec) : 100=4.39% 00:13:37.330 cpu : usr=2.89%, sys=4.68%, ctx=274, majf=0, minf=1 00:13:37.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:37.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.330 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.330 00:13:37.330 Run status group 0 (all jobs): 00:13:37.330 READ: bw=57.7MiB/s (60.5MB/s), 10.2MiB/s-19.9MiB/s (10.7MB/s-20.9MB/s), io=58.5MiB (61.4MB), run=1005-1014msec 00:13:37.330 WRITE: bw=65.1MiB/s (68.2MB/s), 11.9MiB/s-21.7MiB/s (12.5MB/s-22.8MB/s), io=66.0MiB (69.2MB), run=1005-1014msec 00:13:37.330 00:13:37.330 Disk stats (read/write): 00:13:37.330 nvme0n1: ios=3305/3584, merge=0/0, ticks=26294/26137, in_queue=52431, util=97.80% 00:13:37.330 nvme0n2: ios=4621/4711, merge=0/0, ticks=44003/41138, in_queue=85141, util=86.80% 00:13:37.330 nvme0n3: ios=3444/3584, merge=0/0, ticks=33964/35300, in_queue=69264, util=97.81% 00:13:37.330 nvme0n4: ios=2092/2366, merge=0/0, ticks=15426/18914, in_queue=34340, util=99.58% 00:13:37.330 13:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:37.330 [global] 00:13:37.330 thread=1 00:13:37.330 invalidate=1 00:13:37.330 rw=randwrite 00:13:37.330 time_based=1 00:13:37.330 runtime=1 00:13:37.330 ioengine=libaio 00:13:37.330 direct=1 00:13:37.330 bs=4096 00:13:37.330 iodepth=128 00:13:37.330 norandommap=0 00:13:37.330 numjobs=1 00:13:37.330 00:13:37.330 verify_dump=1 00:13:37.330 verify_backlog=512 00:13:37.330 verify_state_save=0 00:13:37.330 do_verify=1 00:13:37.330 verify=crc32c-intel 00:13:37.330 [job0] 00:13:37.330 filename=/dev/nvme0n1 00:13:37.330 [job1] 00:13:37.330 filename=/dev/nvme0n2 00:13:37.330 [job2] 00:13:37.330 filename=/dev/nvme0n3 00:13:37.330 [job3] 00:13:37.330 filename=/dev/nvme0n4 00:13:37.330 Could not set queue depth (nvme0n1) 00:13:37.330 Could not set queue depth (nvme0n2) 00:13:37.330 Could not set queue depth (nvme0n3) 00:13:37.330 Could not set queue depth (nvme0n4) 
00:13:37.588 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.588 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.588 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.588 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.588 fio-3.35 00:13:37.588 Starting 4 threads 00:13:38.967 00:13:38.967 job0: (groupid=0, jobs=1): err= 0: pid=2172986: Thu Dec 5 13:45:10 2024 00:13:38.967 read: IOPS=4487, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1002msec) 00:13:38.967 slat (usec): min=3, max=16418, avg=112.03, stdev=743.95 00:13:38.967 clat (usec): min=1315, max=64511, avg=13960.38, stdev=8852.38 00:13:38.967 lat (usec): min=1328, max=64516, avg=14072.41, stdev=8910.73 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[ 4555], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10421], 00:13:38.967 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:13:38.967 | 70.00th=[12125], 80.00th=[13698], 90.00th=[23725], 95.00th=[31327], 00:13:38.967 | 99.00th=[55837], 99.50th=[59507], 99.90th=[59507], 99.95th=[64750], 00:13:38.967 | 99.99th=[64750] 00:13:38.967 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:38.967 slat (usec): min=4, max=9882, avg=96.66, stdev=543.80 00:13:38.967 clat (usec): min=1382, max=39408, avg=13891.26, stdev=6912.26 00:13:38.967 lat (usec): min=1397, max=39425, avg=13987.92, stdev=6953.60 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[ 4359], 5.00th=[ 8160], 10.00th=[ 9634], 20.00th=[10683], 00:13:38.967 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11469], 60.00th=[11600], 00:13:38.967 | 70.00th=[12125], 80.00th=[15008], 90.00th=[26346], 95.00th=[31589], 00:13:38.967 | 99.00th=[36963], 99.50th=[36963], 99.90th=[39584], 
99.95th=[39584], 00:13:38.967 | 99.99th=[39584] 00:13:38.967 bw ( KiB/s): min=17960, max=18904, per=27.96%, avg=18432.00, stdev=667.51, samples=2 00:13:38.967 iops : min= 4490, max= 4726, avg=4608.00, stdev=166.88, samples=2 00:13:38.967 lat (msec) : 2=0.32%, 4=0.26%, 10=11.94%, 20=73.74%, 50=12.60% 00:13:38.967 lat (msec) : 100=1.14% 00:13:38.967 cpu : usr=6.29%, sys=9.09%, ctx=464, majf=0, minf=2 00:13:38.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:38.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.967 issued rwts: total=4496,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.967 job1: (groupid=0, jobs=1): err= 0: pid=2172987: Thu Dec 5 13:45:10 2024 00:13:38.967 read: IOPS=5553, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1004msec) 00:13:38.967 slat (usec): min=2, max=11475, avg=88.54, stdev=543.59 00:13:38.967 clat (usec): min=770, max=23282, avg=11240.25, stdev=1788.36 00:13:38.967 lat (usec): min=3221, max=23288, avg=11328.79, stdev=1834.29 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[ 5669], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10552], 00:13:38.967 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11469], 00:13:38.967 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13173], 95.00th=[13960], 00:13:38.967 | 99.00th=[16712], 99.50th=[18744], 99.90th=[20055], 99.95th=[23200], 00:13:38.967 | 99.99th=[23200] 00:13:38.967 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:13:38.967 slat (usec): min=3, max=10666, avg=81.47, stdev=475.88 00:13:38.967 clat (usec): min=1352, max=23609, avg=11424.84, stdev=1767.17 00:13:38.967 lat (usec): min=1360, max=23667, avg=11506.31, stdev=1802.38 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[ 4359], 5.00th=[ 8979], 10.00th=[ 9765], 
20.00th=[10552], 00:13:38.967 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:13:38.967 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[14222], 00:13:38.967 | 99.00th=[16057], 99.50th=[16909], 99.90th=[20841], 99.95th=[22676], 00:13:38.967 | 99.99th=[23725] 00:13:38.967 bw ( KiB/s): min=21128, max=23928, per=34.17%, avg=22528.00, stdev=1979.90, samples=2 00:13:38.967 iops : min= 5282, max= 5982, avg=5632.00, stdev=494.97, samples=2 00:13:38.967 lat (usec) : 1000=0.01% 00:13:38.967 lat (msec) : 2=0.14%, 4=0.65%, 10=10.47%, 20=88.60%, 50=0.13% 00:13:38.967 cpu : usr=4.69%, sys=8.37%, ctx=498, majf=0, minf=1 00:13:38.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:38.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.967 issued rwts: total=5576,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.967 job2: (groupid=0, jobs=1): err= 0: pid=2172997: Thu Dec 5 13:45:10 2024 00:13:38.967 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:13:38.967 slat (usec): min=3, max=13827, avg=140.53, stdev=797.95 00:13:38.967 clat (usec): min=9745, max=40161, avg=18118.79, stdev=3659.71 00:13:38.967 lat (usec): min=9751, max=40177, avg=18259.32, stdev=3717.69 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[12256], 5.00th=[13960], 10.00th=[14877], 20.00th=[15401], 00:13:38.967 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17433], 60.00th=[18220], 00:13:38.967 | 70.00th=[18744], 80.00th=[19792], 90.00th=[21890], 95.00th=[26346], 00:13:38.967 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32113], 99.95th=[34866], 00:13:38.967 | 99.99th=[40109] 00:13:38.967 write: IOPS=3369, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1004msec); 0 zone resets 00:13:38.967 slat (usec): min=4, max=10773, avg=158.30, stdev=782.88 
00:13:38.967 clat (usec): min=585, max=39302, avg=20945.44, stdev=6357.97 00:13:38.967 lat (usec): min=5485, max=39311, avg=21103.75, stdev=6405.71 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[ 9765], 5.00th=[13304], 10.00th=[14222], 20.00th=[15533], 00:13:38.967 | 30.00th=[17171], 40.00th=[17957], 50.00th=[19268], 60.00th=[22414], 00:13:38.967 | 70.00th=[23725], 80.00th=[24773], 90.00th=[30016], 95.00th=[33817], 00:13:38.967 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:13:38.967 | 99.99th=[39060] 00:13:38.967 bw ( KiB/s): min=13010, max=13056, per=19.77%, avg=13033.00, stdev=32.53, samples=2 00:13:38.967 iops : min= 3252, max= 3264, avg=3258.00, stdev= 8.49, samples=2 00:13:38.967 lat (usec) : 750=0.02% 00:13:38.967 lat (msec) : 10=0.64%, 20=65.02%, 50=34.33% 00:13:38.967 cpu : usr=3.09%, sys=8.47%, ctx=328, majf=0, minf=1 00:13:38.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:38.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.967 issued rwts: total=3072,3383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.967 job3: (groupid=0, jobs=1): err= 0: pid=2173003: Thu Dec 5 13:45:10 2024 00:13:38.967 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:13:38.967 slat (usec): min=3, max=49280, avg=206.36, stdev=1581.86 00:13:38.967 clat (usec): min=9736, max=90978, avg=28082.82, stdev=19739.21 00:13:38.967 lat (usec): min=9752, max=91001, avg=28289.18, stdev=19838.19 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[ 9896], 5.00th=[12256], 10.00th=[12518], 20.00th=[12780], 00:13:38.967 | 30.00th=[13042], 40.00th=[13698], 50.00th=[15795], 60.00th=[28443], 00:13:38.967 | 70.00th=[32637], 80.00th=[46400], 90.00th=[56886], 95.00th=[66323], 00:13:38.967 | 99.00th=[90702], 99.50th=[90702], 
99.90th=[90702], 99.95th=[90702], 00:13:38.967 | 99.99th=[90702] 00:13:38.967 write: IOPS=2914, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1004msec); 0 zone resets 00:13:38.967 slat (usec): min=4, max=17698, avg=151.89, stdev=902.82 00:13:38.967 clat (usec): min=622, max=57851, avg=18556.07, stdev=9285.89 00:13:38.967 lat (usec): min=6028, max=57857, avg=18707.97, stdev=9328.12 00:13:38.967 clat percentiles (usec): 00:13:38.967 | 1.00th=[ 6849], 5.00th=[10028], 10.00th=[11731], 20.00th=[11994], 00:13:38.967 | 30.00th=[12780], 40.00th=[13304], 50.00th=[15008], 60.00th=[18482], 00:13:38.967 | 70.00th=[19792], 80.00th=[23725], 90.00th=[29754], 95.00th=[40109], 00:13:38.967 | 99.00th=[53216], 99.50th=[53216], 99.90th=[57934], 99.95th=[57934], 00:13:38.967 | 99.99th=[57934] 00:13:38.967 bw ( KiB/s): min=10096, max=12288, per=16.97%, avg=11192.00, stdev=1549.98, samples=2 00:13:38.967 iops : min= 2524, max= 3072, avg=2798.00, stdev=387.49, samples=2 00:13:38.967 lat (usec) : 750=0.02% 00:13:38.967 lat (msec) : 10=3.12%, 20=59.28%, 50=29.68%, 100=7.91% 00:13:38.967 cpu : usr=2.49%, sys=6.88%, ctx=302, majf=0, minf=1 00:13:38.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:38.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.967 issued rwts: total=2560,2926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.967 00:13:38.967 Run status group 0 (all jobs): 00:13:38.967 READ: bw=61.1MiB/s (64.1MB/s), 9.96MiB/s-21.7MiB/s (10.4MB/s-22.7MB/s), io=61.3MiB (64.3MB), run=1002-1004msec 00:13:38.967 WRITE: bw=64.4MiB/s (67.5MB/s), 11.4MiB/s-21.9MiB/s (11.9MB/s-23.0MB/s), io=64.6MiB (67.8MB), run=1002-1004msec 00:13:38.967 00:13:38.968 Disk stats (read/write): 00:13:38.968 nvme0n1: ios=4146/4230, merge=0/0, ticks=24670/23328, in_queue=47998, util=89.28% 00:13:38.968 nvme0n2: 
ios=4658/4921, merge=0/0, ticks=28665/28946, in_queue=57611, util=93.70% 00:13:38.968 nvme0n3: ios=2584/2816, merge=0/0, ticks=23799/27176, in_queue=50975, util=97.70% 00:13:38.968 nvme0n4: ios=2093/2080, merge=0/0, ticks=19306/15284, in_queue=34590, util=96.00% 00:13:38.968 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:38.968 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2173172 00:13:38.968 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:38.968 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:38.968 [global] 00:13:38.968 thread=1 00:13:38.968 invalidate=1 00:13:38.968 rw=read 00:13:38.968 time_based=1 00:13:38.968 runtime=10 00:13:38.968 ioengine=libaio 00:13:38.968 direct=1 00:13:38.968 bs=4096 00:13:38.968 iodepth=1 00:13:38.968 norandommap=1 00:13:38.968 numjobs=1 00:13:38.968 00:13:38.968 [job0] 00:13:38.968 filename=/dev/nvme0n1 00:13:38.968 [job1] 00:13:38.968 filename=/dev/nvme0n2 00:13:38.968 [job2] 00:13:38.968 filename=/dev/nvme0n3 00:13:38.968 [job3] 00:13:38.968 filename=/dev/nvme0n4 00:13:38.968 Could not set queue depth (nvme0n1) 00:13:38.968 Could not set queue depth (nvme0n2) 00:13:38.968 Could not set queue depth (nvme0n3) 00:13:38.968 Could not set queue depth (nvme0n4) 00:13:38.968 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.968 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.968 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.968 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.968 fio-3.35 00:13:38.968 Starting 4 threads 00:13:42.251 
13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:42.251 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:42.251 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26193920, buflen=4096 00:13:42.251 fio: pid=2173370, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:42.251 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=31408128, buflen=4096 00:13:42.251 fio: pid=2173356, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:42.251 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:42.251 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:42.508 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:42.508 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:42.508 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=15597568, buflen=4096 00:13:42.508 fio: pid=2173301, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:42.765 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:42.766 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:13:42.766 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5230592, buflen=4096 00:13:42.766 fio: pid=2173321, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:43.028 00:13:43.028 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2173301: Thu Dec 5 13:45:14 2024 00:13:43.028 read: IOPS=1090, BW=4359KiB/s (4464kB/s)(14.9MiB/3494msec) 00:13:43.029 slat (usec): min=4, max=10927, avg=13.05, stdev=218.22 00:13:43.029 clat (usec): min=166, max=41120, avg=896.35, stdev=5195.97 00:13:43.029 lat (usec): min=172, max=51986, avg=909.40, stdev=5224.33 00:13:43.029 clat percentiles (usec): 00:13:43.029 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:13:43.029 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:13:43.029 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 262], 95.00th=[ 289], 00:13:43.029 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:43.029 | 99.99th=[41157] 00:13:43.029 bw ( KiB/s): min= 96, max=15304, per=24.73%, avg=5033.33, stdev=7208.63, samples=6 00:13:43.029 iops : min= 24, max= 3826, avg=1258.33, stdev=1802.16, samples=6 00:13:43.029 lat (usec) : 250=86.35%, 500=11.58%, 750=0.32% 00:13:43.029 lat (msec) : 4=0.08%, 50=1.65% 00:13:43.029 cpu : usr=0.34%, sys=1.03%, ctx=3812, majf=0, minf=2 00:13:43.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.029 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.029 issued rwts: total=3809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.029 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2173321: Thu Dec 5 13:45:14 2024 00:13:43.029 read: 
IOPS=339, BW=1357KiB/s (1390kB/s)(5108KiB/3763msec) 00:13:43.029 slat (usec): min=4, max=15875, avg=43.53, stdev=659.11 00:13:43.029 clat (usec): min=170, max=41145, avg=2881.26, stdev=10042.94 00:13:43.029 lat (usec): min=176, max=53984, avg=2915.56, stdev=10096.75 00:13:43.029 clat percentiles (usec): 00:13:43.029 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:13:43.029 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:13:43.029 | 70.00th=[ 258], 80.00th=[ 289], 90.00th=[ 338], 95.00th=[41157], 00:13:43.029 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:43.029 | 99.99th=[41157] 00:13:43.029 bw ( KiB/s): min= 96, max= 4840, per=6.90%, avg=1404.43, stdev=1848.49, samples=7 00:13:43.029 iops : min= 24, max= 1210, avg=351.00, stdev=462.09, samples=7 00:13:43.029 lat (usec) : 250=69.09%, 500=24.18%, 750=0.08% 00:13:43.029 lat (msec) : 4=0.08%, 50=6.49% 00:13:43.029 cpu : usr=0.29%, sys=0.32%, ctx=1283, majf=0, minf=2 00:13:43.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.029 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.029 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.029 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2173356: Thu Dec 5 13:45:14 2024 00:13:43.029 read: IOPS=2420, BW=9679KiB/s (9911kB/s)(30.0MiB/3169msec) 00:13:43.029 slat (nsec): min=4024, max=57423, avg=9837.00, stdev=6596.17 00:13:43.029 clat (usec): min=171, max=41008, avg=399.86, stdev=2675.15 00:13:43.029 lat (usec): min=181, max=41024, avg=409.70, stdev=2676.05 00:13:43.029 clat percentiles (usec): 00:13:43.029 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:13:43.029 | 30.00th=[ 202], 40.00th=[ 
206], 50.00th=[ 210], 60.00th=[ 217], 00:13:43.029 | 70.00th=[ 227], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 285], 00:13:43.029 | 99.00th=[ 416], 99.50th=[ 506], 99.90th=[41157], 99.95th=[41157], 00:13:43.029 | 99.99th=[41157] 00:13:43.030 bw ( KiB/s): min= 96, max=17744, per=45.67%, avg=9296.00, stdev=7832.59, samples=6 00:13:43.030 iops : min= 24, max= 4436, avg=2324.00, stdev=1958.15, samples=6 00:13:43.030 lat (usec) : 250=84.97%, 500=14.50%, 750=0.08% 00:13:43.030 lat (msec) : 50=0.44% 00:13:43.030 cpu : usr=1.26%, sys=2.46%, ctx=7669, majf=0, minf=1 00:13:43.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.030 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.030 issued rwts: total=7669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.030 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2173370: Thu Dec 5 13:45:14 2024 00:13:43.030 read: IOPS=2207, BW=8830KiB/s (9042kB/s)(25.0MiB/2897msec) 00:13:43.030 slat (nsec): min=4087, max=69062, avg=10992.50, stdev=7271.60 00:13:43.030 clat (usec): min=173, max=41204, avg=435.71, stdev=2920.75 00:13:43.030 lat (usec): min=178, max=41210, avg=446.71, stdev=2921.77 00:13:43.030 clat percentiles (usec): 00:13:43.030 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:13:43.030 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:13:43.030 | 70.00th=[ 223], 80.00th=[ 247], 90.00th=[ 289], 95.00th=[ 322], 00:13:43.030 | 99.00th=[ 441], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:43.030 | 99.99th=[41157] 00:13:43.030 bw ( KiB/s): min= 1880, max=17488, per=50.16%, avg=10209.60, stdev=6530.27, samples=5 00:13:43.030 iops : min= 470, max= 4372, avg=2552.40, stdev=1632.57, samples=5 00:13:43.030 lat (usec) : 
250=80.47%, 500=18.79%, 750=0.17% 00:13:43.030 lat (msec) : 2=0.02%, 4=0.02%, 50=0.52% 00:13:43.030 cpu : usr=1.00%, sys=2.90%, ctx=6396, majf=0, minf=1 00:13:43.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.030 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.030 issued rwts: total=6396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.030 00:13:43.030 Run status group 0 (all jobs): 00:13:43.030 READ: bw=19.9MiB/s (20.8MB/s), 1357KiB/s-9679KiB/s (1390kB/s-9911kB/s), io=74.8MiB (78.4MB), run=2897-3763msec 00:13:43.030 00:13:43.030 Disk stats (read/write): 00:13:43.030 nvme0n1: ios=3805/0, merge=0/0, ticks=3268/0, in_queue=3268, util=95.34% 00:13:43.030 nvme0n2: ios=1302/0, merge=0/0, ticks=4171/0, in_queue=4171, util=98.34% 00:13:43.030 nvme0n3: ios=7435/0, merge=0/0, ticks=2938/0, in_queue=2938, util=96.72% 00:13:43.030 nvme0n4: ios=6390/0, merge=0/0, ticks=2672/0, in_queue=2672, util=96.74% 00:13:43.030 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:43.287 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:43.545 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:43.545 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:43.802 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:43.802 
13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:44.059 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:44.059 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2173172 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 
4 -eq 0 ']' 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:44.316 nvmf hotplug test: fio failed as expected 00:13:44.316 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:44.882 rmmod nvme_tcp 00:13:44.882 rmmod nvme_fabrics 00:13:44.882 rmmod nvme_keyring 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@129 -- # return 0 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2170509 ']' 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2170509 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2170509 ']' 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2170509 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170509 00:13:44.882 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.883 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.883 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170509' 00:13:44.883 killing process with pid 2170509 00:13:44.883 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2170509 00:13:44.883 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2170509 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:45.143 13:45:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.143 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:47.047 00:13:47.047 real 0m24.422s 00:13:47.047 user 1m25.791s 00:13:47.047 sys 0m7.083s 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.047 ************************************ 00:13:47.047 END TEST nvmf_fio_target 00:13:47.047 ************************************ 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:13:47.047 ************************************ 00:13:47.047 START TEST nvmf_bdevio 00:13:47.047 ************************************ 00:13:47.047 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:47.306 * Looking for test storage... 00:13:47.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.306 13:45:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.306 --rc genhtml_branch_coverage=1 00:13:47.306 --rc genhtml_function_coverage=1 00:13:47.306 --rc genhtml_legend=1 00:13:47.306 --rc geninfo_all_blocks=1 00:13:47.306 --rc geninfo_unexecuted_blocks=1 00:13:47.306 00:13:47.306 ' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.306 --rc genhtml_branch_coverage=1 00:13:47.306 --rc genhtml_function_coverage=1 00:13:47.306 --rc genhtml_legend=1 00:13:47.306 --rc geninfo_all_blocks=1 00:13:47.306 --rc geninfo_unexecuted_blocks=1 00:13:47.306 00:13:47.306 ' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.306 --rc genhtml_branch_coverage=1 00:13:47.306 --rc genhtml_function_coverage=1 00:13:47.306 --rc genhtml_legend=1 00:13:47.306 --rc geninfo_all_blocks=1 00:13:47.306 --rc geninfo_unexecuted_blocks=1 00:13:47.306 00:13:47.306 ' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.306 --rc genhtml_branch_coverage=1 00:13:47.306 --rc genhtml_function_coverage=1 00:13:47.306 --rc genhtml_legend=1 00:13:47.306 --rc geninfo_all_blocks=1 00:13:47.306 --rc geninfo_unexecuted_blocks=1 00:13:47.306 00:13:47.306 ' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.306 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:47.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:47.307 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.841 13:45:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:49.841 13:45:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:49.841 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:49.841 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:49.841 
13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:49.841 Found net devices under 0000:09:00.0: cvl_0_0 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.841 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:49.842 Found net devices under 0000:09:00.1: cvl_0_1 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:49.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:49.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:13:49.842 00:13:49.842 --- 10.0.0.2 ping statistics --- 00:13:49.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.842 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:13:49.842 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:13:49.842 00:13:49.842 --- 10.0.0.1 ping statistics --- 00:13:49.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.842 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.842 13:45:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2176026 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2176026 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2176026 ']' 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:49.842 [2024-12-05 13:45:21.086757] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:13:49.842 [2024-12-05 13:45:21.086851] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.842 [2024-12-05 13:45:21.156790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.842 [2024-12-05 13:45:21.211861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.842 [2024-12-05 13:45:21.211925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.842 [2024-12-05 13:45:21.211938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.842 [2024-12-05 13:45:21.211949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.842 [2024-12-05 13:45:21.211958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:49.842 [2024-12-05 13:45:21.213397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:49.842 [2024-12-05 13:45:21.213459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:49.842 [2024-12-05 13:45:21.213528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:49.842 [2024-12-05 13:45:21.213531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.842 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.111 [2024-12-05 13:45:21.367392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.111 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.111 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:50.111 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.111 13:45:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.111 Malloc0 00:13:50.111 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.111 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:50.111 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.111 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.112 [2024-12-05 13:45:21.436713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:50.112 { 00:13:50.112 "params": { 00:13:50.112 "name": "Nvme$subsystem", 00:13:50.112 "trtype": "$TEST_TRANSPORT", 00:13:50.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:50.112 "adrfam": "ipv4", 00:13:50.112 "trsvcid": "$NVMF_PORT", 00:13:50.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:50.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:50.112 "hdgst": ${hdgst:-false}, 00:13:50.112 "ddgst": ${ddgst:-false} 00:13:50.112 }, 00:13:50.112 "method": "bdev_nvme_attach_controller" 00:13:50.112 } 00:13:50.112 EOF 00:13:50.112 )") 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:50.112 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:50.112 "params": { 00:13:50.112 "name": "Nvme1", 00:13:50.112 "trtype": "tcp", 00:13:50.112 "traddr": "10.0.0.2", 00:13:50.112 "adrfam": "ipv4", 00:13:50.112 "trsvcid": "4420", 00:13:50.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:50.112 "hdgst": false, 00:13:50.112 "ddgst": false 00:13:50.112 }, 00:13:50.112 "method": "bdev_nvme_attach_controller" 00:13:50.112 }' 00:13:50.112 [2024-12-05 13:45:21.488902] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:13:50.112 [2024-12-05 13:45:21.488973] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176049 ] 00:13:50.112 [2024-12-05 13:45:21.558976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.112 [2024-12-05 13:45:21.621987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.112 [2024-12-05 13:45:21.622039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.112 [2024-12-05 13:45:21.622043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.369 I/O targets: 00:13:50.369 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:50.369 00:13:50.369 00:13:50.369 CUnit - A unit testing framework for C - Version 2.1-3 00:13:50.369 http://cunit.sourceforge.net/ 00:13:50.369 00:13:50.369 00:13:50.369 Suite: bdevio tests on: Nvme1n1 00:13:50.369 Test: blockdev write read block ...passed 00:13:50.626 Test: blockdev write zeroes read block ...passed 00:13:50.626 Test: blockdev write zeroes read no split ...passed 00:13:50.626 Test: blockdev write zeroes read split 
...passed 00:13:50.626 Test: blockdev write zeroes read split partial ...passed 00:13:50.626 Test: blockdev reset ...[2024-12-05 13:45:22.002283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:50.626 [2024-12-05 13:45:22.002383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407cb0 (9): Bad file descriptor 00:13:50.626 [2024-12-05 13:45:22.098705] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:13:50.626 passed 00:13:50.626 Test: blockdev write read 8 blocks ...passed 00:13:50.626 Test: blockdev write read size > 128k ...passed 00:13:50.626 Test: blockdev write read invalid size ...passed 00:13:50.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:50.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:50.884 Test: blockdev write read max offset ...passed 00:13:50.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:50.884 Test: blockdev writev readv 8 blocks ...passed 00:13:50.884 Test: blockdev writev readv 30 x 1block ...passed 00:13:50.884 Test: blockdev writev readv block ...passed 00:13:50.884 Test: blockdev writev readv size > 128k ...passed 00:13:50.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:50.884 Test: blockdev comparev and writev ...[2024-12-05 13:45:22.354618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 13:45:22.354654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:50.884 [2024-12-05 13:45:22.354680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 
13:45:22.354698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:50.884 [2024-12-05 13:45:22.355008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 13:45:22.355033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:50.884 [2024-12-05 13:45:22.355055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 13:45:22.355071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:50.884 [2024-12-05 13:45:22.355366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 13:45:22.355390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:50.884 [2024-12-05 13:45:22.355412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 13:45:22.355439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:50.884 [2024-12-05 13:45:22.355732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 13:45:22.355756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:50.884 [2024-12-05 13:45:22.355777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:13:50.884 [2024-12-05 13:45:22.355794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:50.884 passed 00:13:51.143 Test: blockdev nvme passthru rw ...passed 00:13:51.143 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:45:22.438669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:51.143 [2024-12-05 13:45:22.438697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:51.143 [2024-12-05 13:45:22.438836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:51.143 [2024-12-05 13:45:22.438859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:51.143 [2024-12-05 13:45:22.438994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:51.143 [2024-12-05 13:45:22.439018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:51.143 [2024-12-05 13:45:22.439157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:51.143 [2024-12-05 13:45:22.439191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:51.143 passed 00:13:51.143 Test: blockdev nvme admin passthru ...passed 00:13:51.143 Test: blockdev copy ...passed 00:13:51.143 00:13:51.143 Run Summary: Type Total Ran Passed Failed Inactive 00:13:51.143 suites 1 1 n/a 0 0 00:13:51.143 tests 23 23 23 0 0 00:13:51.143 asserts 152 152 152 0 n/a 00:13:51.143 00:13:51.143 Elapsed time = 1.371 seconds 
00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.403 rmmod nvme_tcp 00:13:51.403 rmmod nvme_fabrics 00:13:51.403 rmmod nvme_keyring 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2176026 ']' 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2176026 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2176026 ']' 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2176026 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176026 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176026' 00:13:51.403 killing process with pid 2176026 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2176026 00:13:51.403 13:45:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2176026 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:13:51.660 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:51.661 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.661 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.661 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.201 00:13:54.201 real 0m6.568s 00:13:54.201 user 0m10.491s 00:13:54.201 sys 0m2.202s 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:54.201 ************************************ 00:13:54.201 END TEST nvmf_bdevio 00:13:54.201 ************************************ 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:54.201 00:13:54.201 real 3m56.559s 00:13:54.201 user 10m19.314s 00:13:54.201 sys 1m7.786s 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:54.201 ************************************ 00:13:54.201 END TEST nvmf_target_core 00:13:54.201 ************************************ 00:13:54.201 13:45:25 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:54.201 13:45:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.201 13:45:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.201 13:45:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:13:54.201 ************************************ 00:13:54.201 START TEST nvmf_target_extra 00:13:54.201 ************************************ 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:54.201 * Looking for test storage... 00:13:54.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.201 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:54.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.202 --rc genhtml_branch_coverage=1 00:13:54.202 --rc genhtml_function_coverage=1 00:13:54.202 --rc genhtml_legend=1 00:13:54.202 --rc geninfo_all_blocks=1 
00:13:54.202 --rc geninfo_unexecuted_blocks=1 00:13:54.202 00:13:54.202 ' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:54.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.202 --rc genhtml_branch_coverage=1 00:13:54.202 --rc genhtml_function_coverage=1 00:13:54.202 --rc genhtml_legend=1 00:13:54.202 --rc geninfo_all_blocks=1 00:13:54.202 --rc geninfo_unexecuted_blocks=1 00:13:54.202 00:13:54.202 ' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:54.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.202 --rc genhtml_branch_coverage=1 00:13:54.202 --rc genhtml_function_coverage=1 00:13:54.202 --rc genhtml_legend=1 00:13:54.202 --rc geninfo_all_blocks=1 00:13:54.202 --rc geninfo_unexecuted_blocks=1 00:13:54.202 00:13:54.202 ' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:54.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.202 --rc genhtml_branch_coverage=1 00:13:54.202 --rc genhtml_function_coverage=1 00:13:54.202 --rc genhtml_legend=1 00:13:54.202 --rc geninfo_all_blocks=1 00:13:54.202 --rc geninfo_unexecuted_blocks=1 00:13:54.202 00:13:54.202 ' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.202 ************************************ 00:13:54.202 START TEST nvmf_example 00:13:54.202 ************************************ 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:54.202 * Looking for test storage... 00:13:54.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.202 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.203 
13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.203 --rc genhtml_branch_coverage=1 00:13:54.203 --rc genhtml_function_coverage=1 00:13:54.203 --rc genhtml_legend=1 00:13:54.203 --rc geninfo_all_blocks=1 00:13:54.203 --rc geninfo_unexecuted_blocks=1 00:13:54.203 00:13:54.203 ' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.203 --rc genhtml_branch_coverage=1 00:13:54.203 --rc genhtml_function_coverage=1 00:13:54.203 --rc genhtml_legend=1 00:13:54.203 --rc geninfo_all_blocks=1 00:13:54.203 --rc geninfo_unexecuted_blocks=1 00:13:54.203 00:13:54.203 ' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.203 --rc genhtml_branch_coverage=1 00:13:54.203 --rc genhtml_function_coverage=1 00:13:54.203 --rc genhtml_legend=1 00:13:54.203 --rc geninfo_all_blocks=1 00:13:54.203 --rc geninfo_unexecuted_blocks=1 00:13:54.203 00:13:54.203 ' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.203 --rc 
genhtml_branch_coverage=1 00:13:54.203 --rc genhtml_function_coverage=1 00:13:54.203 --rc genhtml_legend=1 00:13:54.203 --rc geninfo_all_blocks=1 00:13:54.203 --rc geninfo_unexecuted_blocks=1 00:13:54.203 00:13:54.203 ' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:54.203 13:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:54.203 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.204 
13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.204 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.733 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.734 13:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:56.734 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:56.734 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:56.734 Found net devices under 0000:09:00.0: cvl_0_0 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:56.734 13:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:56.734 Found net devices under 0000:09:00.1: cvl_0_1 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.734 
13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:13:56.734 00:13:56.734 --- 10.0.0.2 ping statistics --- 00:13:56.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.734 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:13:56.734 00:13:56.734 --- 10.0.0.1 ping statistics --- 00:13:56.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.734 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:56.734 13:45:27 
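The nvmf_tcp_init sequence traced above builds a two-port loopback topology: one NIC port (cvl_0_0) is moved into a network namespace to play the target, the peer port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens TCP port 4420, and both directions are verified with ping. The sketch below condenses that sequence; it is a dry run (commands are echoed, not executed), with interface names, addresses, and the namespace name taken from the log, so it can be read without root or the test hardware.

```shell
# Dry-run sketch of the netns-based nvmf-tcp topology from the trace.
# run() only echoes, so nothing here touches the real network stack.
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Because the target application is later launched with `ip netns exec cvl_0_0_ns_spdk`, it only sees the moved port, which keeps the target and initiator TCP stacks isolated on a single machine.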
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:56.734 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2178317 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2178317 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2178317 ']' 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:13:56.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.735 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.671 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:57.671 
13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:57.671 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:09.932 Initializing NVMe Controllers 00:14:09.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:09.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:09.932 Initialization complete. Launching workers. 00:14:09.932 ======================================================== 00:14:09.932 Latency(us) 00:14:09.932 Device Information : IOPS MiB/s Average min max 00:14:09.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14505.75 56.66 4411.38 717.36 16030.69 00:14:09.932 ======================================================== 00:14:09.932 Total : 14505.75 56.66 4411.38 717.36 16030.69 00:14:09.932 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.932 rmmod nvme_tcp 00:14:09.932 rmmod nvme_fabrics 00:14:09.932 rmmod nvme_keyring 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
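The test body above provisions the target over RPC and then drives it with spdk_nvme_perf: create the TCP transport, create a malloc bdev, create a subsystem, attach the namespace and a listener, then run a 10-second 4 KiB random 30/70 read/write workload at queue depth 64. A condensed dry-run sketch (commands echoed, not executed) follows; in the trace, `rpc_cmd` is a wrapper that talks to the running target over /var/tmp/spdk.sock, and `rpc.py` stands in for it here as an assumption.

```shell
# Dry-run sketch of the provisioning + perf sequence from the trace.
# run() only echoes; rpc.py is assumed as the concrete RPC client.
NQN=nqn.2016-06.io.spdk:cnode1

run() { echo "+ $*"; }

run rpc.py nvmf_create_transport -t tcp -o -u 8192
run rpc.py bdev_malloc_create 64 512          # log shows this yields Malloc0
run rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
run rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0
run rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
run spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
```

The perf summary in the log (about 14.5K IOPS, ~4.4 ms average latency) is the result of exactly this workload against the in-memory Malloc0 namespace.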
00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2178317 ']' 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2178317 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2178317 ']' 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2178317 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178317 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178317' 00:14:09.932 killing process with pid 2178317 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2178317 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2178317 00:14:09.932 nvmf threads initialize successfully 00:14:09.932 bdev subsystem init successfully 00:14:09.932 created a nvmf target service 00:14:09.932 create targets's poll groups done 00:14:09.932 all subsystems of target started 00:14:09.932 nvmf target is running 00:14:09.932 all subsystems of target stopped 00:14:09.932 destroy targets's poll groups done 00:14:09.932 destroyed the nvmf target service 00:14:09.932 bdev subsystem 
finish successfully 00:14:09.932 nvmf threads destroy successfully 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.932 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.190 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:10.190 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:10.190 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.190 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:10.451 00:14:10.451 real 0m16.339s 00:14:10.451 user 0m45.083s 00:14:10.451 sys 0m3.817s 00:14:10.451 
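The teardown traced above mirrors the setup: unload the host-side kernel modules, strip only the SPDK-tagged iptables rules (each rule was inserted with an `SPDK_NVMF:` comment precisely so this filter can find them), remove the namespace, and flush the initiator address. A dry-run sketch (commands echoed, not executed); `ip netns delete` is an assumption for what the `_remove_spdk_ns` helper does internally.

```shell
# Dry-run sketch of the teardown from the trace. run() only echoes.
run() { echo "+ $*"; }

run modprobe -v -r nvme-tcp
run modprobe -v -r nvme-fabrics
# Restore the ruleset minus anything carrying the SPDK_NVMF comment tag.
run sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
run ip -4 addr flush cvl_0_1
```

Filtering by comment tag rather than deleting rules by position means the cleanup is safe even if other jobs have inserted their own INPUT rules in the meantime.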
13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.451 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:10.451 ************************************ 00:14:10.451 END TEST nvmf_example 00:14:10.451 ************************************ 00:14:10.451 13:45:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.452 ************************************ 00:14:10.452 START TEST nvmf_filesystem 00:14:10.452 ************************************ 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:10.452 * Looking for test storage... 
00:14:10.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:10.452 
13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:10.452 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:10.452 --rc genhtml_branch_coverage=1 00:14:10.452 --rc genhtml_function_coverage=1 00:14:10.452 --rc genhtml_legend=1 00:14:10.452 --rc geninfo_all_blocks=1 00:14:10.452 --rc geninfo_unexecuted_blocks=1 00:14:10.452 00:14:10.452 ' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:10.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.452 --rc genhtml_branch_coverage=1 00:14:10.452 --rc genhtml_function_coverage=1 00:14:10.452 --rc genhtml_legend=1 00:14:10.452 --rc geninfo_all_blocks=1 00:14:10.452 --rc geninfo_unexecuted_blocks=1 00:14:10.452 00:14:10.452 ' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:10.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.452 --rc genhtml_branch_coverage=1 00:14:10.452 --rc genhtml_function_coverage=1 00:14:10.452 --rc genhtml_legend=1 00:14:10.452 --rc geninfo_all_blocks=1 00:14:10.452 --rc geninfo_unexecuted_blocks=1 00:14:10.452 00:14:10.452 ' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:10.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.452 --rc genhtml_branch_coverage=1 00:14:10.452 --rc genhtml_function_coverage=1 00:14:10.452 --rc genhtml_legend=1 00:14:10.452 --rc geninfo_all_blocks=1 00:14:10.452 --rc geninfo_unexecuted_blocks=1 00:14:10.452 00:14:10.452 ' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:10.452 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:10.452 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:10.452 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:10.452 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:10.453 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:10.453 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:10.453 
13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:10.453 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:10.453 #define SPDK_CONFIG_H 00:14:10.453 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:10.453 #define SPDK_CONFIG_APPS 1 00:14:10.453 #define SPDK_CONFIG_ARCH native 00:14:10.453 #undef SPDK_CONFIG_ASAN 00:14:10.453 #undef SPDK_CONFIG_AVAHI 00:14:10.453 #undef SPDK_CONFIG_CET 00:14:10.453 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:10.453 #define SPDK_CONFIG_COVERAGE 1 00:14:10.453 #define SPDK_CONFIG_CROSS_PREFIX 00:14:10.453 #undef SPDK_CONFIG_CRYPTO 00:14:10.453 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:10.453 #undef SPDK_CONFIG_CUSTOMOCF 00:14:10.453 #undef SPDK_CONFIG_DAOS 00:14:10.453 #define SPDK_CONFIG_DAOS_DIR 00:14:10.453 #define SPDK_CONFIG_DEBUG 1 00:14:10.453 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:10.453 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:10.453 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:10.453 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:10.453 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:10.453 #undef SPDK_CONFIG_DPDK_UADK 00:14:10.453 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:10.453 #define SPDK_CONFIG_EXAMPLES 1 00:14:10.453 #undef SPDK_CONFIG_FC 00:14:10.453 #define SPDK_CONFIG_FC_PATH 00:14:10.453 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:10.453 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:10.453 #define SPDK_CONFIG_FSDEV 1 00:14:10.453 #undef SPDK_CONFIG_FUSE 00:14:10.453 #undef SPDK_CONFIG_FUZZER 00:14:10.453 #define SPDK_CONFIG_FUZZER_LIB 00:14:10.453 #undef SPDK_CONFIG_GOLANG 00:14:10.453 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:10.453 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:10.453 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:10.453 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:10.453 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:10.453 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:10.453 #undef SPDK_CONFIG_HAVE_LZ4 00:14:10.453 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:10.453 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:10.453 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:10.454 #define SPDK_CONFIG_IDXD 1 00:14:10.454 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:10.454 #undef SPDK_CONFIG_IPSEC_MB 00:14:10.454 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:10.454 #define SPDK_CONFIG_ISAL 1 00:14:10.454 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:10.454 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:10.454 #define SPDK_CONFIG_LIBDIR 00:14:10.454 #undef SPDK_CONFIG_LTO 00:14:10.454 #define SPDK_CONFIG_MAX_LCORES 128 00:14:10.454 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:10.454 #define SPDK_CONFIG_NVME_CUSE 1 00:14:10.454 #undef SPDK_CONFIG_OCF 00:14:10.454 #define SPDK_CONFIG_OCF_PATH 00:14:10.454 #define SPDK_CONFIG_OPENSSL_PATH 00:14:10.454 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:10.454 #define SPDK_CONFIG_PGO_DIR 00:14:10.454 #undef SPDK_CONFIG_PGO_USE 00:14:10.454 #define SPDK_CONFIG_PREFIX /usr/local 00:14:10.454 #undef SPDK_CONFIG_RAID5F 00:14:10.454 #undef SPDK_CONFIG_RBD 00:14:10.454 #define SPDK_CONFIG_RDMA 1 00:14:10.454 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:10.454 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:10.454 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:10.454 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:10.454 #define SPDK_CONFIG_SHARED 1 00:14:10.454 #undef SPDK_CONFIG_SMA 00:14:10.454 #define SPDK_CONFIG_TESTS 1 00:14:10.454 #undef SPDK_CONFIG_TSAN 00:14:10.454 #define SPDK_CONFIG_UBLK 1 00:14:10.454 #define SPDK_CONFIG_UBSAN 1 00:14:10.454 #undef SPDK_CONFIG_UNIT_TESTS 00:14:10.454 #undef SPDK_CONFIG_URING 00:14:10.454 #define SPDK_CONFIG_URING_PATH 00:14:10.454 #undef SPDK_CONFIG_URING_ZNS 00:14:10.454 #undef SPDK_CONFIG_USDT 00:14:10.454 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:10.454 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:10.454 #define SPDK_CONFIG_VFIO_USER 1 00:14:10.454 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:10.454 #define SPDK_CONFIG_VHOST 1 00:14:10.454 #define SPDK_CONFIG_VIRTIO 1 00:14:10.454 #undef SPDK_CONFIG_VTUNE 00:14:10.454 #define SPDK_CONFIG_VTUNE_DIR 00:14:10.454 #define SPDK_CONFIG_WERROR 1 00:14:10.454 #define SPDK_CONFIG_WPDK_DIR 00:14:10.454 #undef SPDK_CONFIG_XNVME 00:14:10.454 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:10.454 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:10.455 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:10.455 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:10.455 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:10.455 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:10.455 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:10.455 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:10.716 
13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:10.716 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:10.716 
13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:10.716 13:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:10.716 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:10.717 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:14:10.717 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2180025 ]] 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2180025 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.9HHavb 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.9HHavb/tests/target /tmp/spdk.9HHavb 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55216861184 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988515840 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6771654656 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:10.718 
13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984224768 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375265280 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993629184 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:14:10.718 13:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=630784 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:10.718 * Looking for test storage... 
00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55216861184 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8986247168 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.718 13:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:14:10.718 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:10.719 13:45:42 
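The trace above (`set_test_storage`) walks a list of candidate directories, reads the backing mount's available space via `df`, and accepts the first candidate that can hold the requested size. A minimal sketch of that selection logic, with placeholder directory names rather than the real SPDK workspace paths:

```shell
#!/usr/bin/env bash
# Hedged sketch of the storage-candidate selection traced above.
# pick_test_storage REQUESTED_BYTES DIR... -> prints first dir with room.
pick_test_storage() {
    local requested_size=$1; shift
    local dir avail
    for dir in "$@"; do
        # df -P prints one stable line per mount; column 4 is "Available"
        # in 1024-byte blocks, so scale it to bytes before comparing.
        avail=$(df -P "$dir" 2>/dev/null | awk 'NR==2 {print $4 * 1024}')
        [ -z "$avail" ] && continue
        if [ "$avail" -ge "$requested_size" ]; then
            echo "$dir"
            return 0
        fi
    done
    return 1  # no candidate had enough space
}
```

The real helper additionally special-cases tmpfs/ramfs mounts and grows `new_size` by the space already used on the mount, as the `new_size=8986247168` line in the trace shows.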
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:10.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.719 --rc genhtml_branch_coverage=1 00:14:10.719 --rc genhtml_function_coverage=1 00:14:10.719 --rc genhtml_legend=1 00:14:10.719 --rc geninfo_all_blocks=1 00:14:10.719 --rc geninfo_unexecuted_blocks=1 00:14:10.719 00:14:10.719 ' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:10.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.719 --rc genhtml_branch_coverage=1 00:14:10.719 --rc genhtml_function_coverage=1 00:14:10.719 --rc genhtml_legend=1 00:14:10.719 --rc geninfo_all_blocks=1 00:14:10.719 --rc geninfo_unexecuted_blocks=1 00:14:10.719 00:14:10.719 ' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:10.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.719 --rc genhtml_branch_coverage=1 00:14:10.719 --rc genhtml_function_coverage=1 00:14:10.719 --rc genhtml_legend=1 00:14:10.719 --rc geninfo_all_blocks=1 00:14:10.719 --rc geninfo_unexecuted_blocks=1 00:14:10.719 00:14:10.719 ' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:10.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.719 --rc genhtml_branch_coverage=1 00:14:10.719 --rc genhtml_function_coverage=1 00:14:10.719 --rc genhtml_legend=1 00:14:10.719 --rc geninfo_all_blocks=1 00:14:10.719 --rc geninfo_unexecuted_blocks=1 00:14:10.719 00:14:10.719 ' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.719 13:45:42 
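The `lt 1.15 2` / `cmp_versions` trace above compares dotted version strings component-wise, splitting on `.`, `-`, and `:` (the `IFS=.-:` lines) and padding the shorter version with zeros. A minimal sketch of that comparison, assuming purely numeric components:

```shell
#!/usr/bin/env bash
# Hedged sketch of the dotted-version less-than check traced above.
# version_lt A B -> succeeds when A sorts strictly before B.
version_lt() {
    local IFS=.-:          # same separators as the traced cmp_versions
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=${#a[@]} i x y
    (( ${#b[@]} > n )) && n=${#b[@]}
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}   # pad missing components with 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal, so not strictly less-than
}
```

Numeric comparison is what makes `1.2 < 1.10` hold here, where a plain string comparison would get it backwards.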
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.719 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:10.720 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.255 13:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:13.255 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.255 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:13.256 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.256 13:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:13.256 Found net devices under 0000:09:00.0: cvl_0_0 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:13.256 Found net devices under 0000:09:00.1: cvl_0_1 00:14:13.256 13:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:13.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:14:13.256 00:14:13.256 --- 10.0.0.2 ping statistics --- 00:14:13.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.256 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:14:13.256 00:14:13.256 --- 10.0.0.1 ping statistics --- 00:14:13.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.256 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:13.256 13:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 ************************************ 00:14:13.256 START TEST nvmf_filesystem_no_in_capsule 00:14:13.256 ************************************ 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2181782 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2181782 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2181782 ']' 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.256 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 [2024-12-05 13:45:44.691901] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:14:13.256 [2024-12-05 13:45:44.691995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.256 [2024-12-05 13:45:44.764981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.516 [2024-12-05 13:45:44.819088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.516 [2024-12-05 13:45:44.819142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:13.516 [2024-12-05 13:45:44.819163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.516 [2024-12-05 13:45:44.819173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.516 [2024-12-05 13:45:44.819182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.516 [2024-12-05 13:45:44.820606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.516 [2024-12-05 13:45:44.820674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.516 [2024-12-05 13:45:44.820768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.516 [2024-12-05 13:45:44.820771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:13.516 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:13.517 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.517 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.517 [2024-12-05 13:45:44.968914] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.517 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.517 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:13.517 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.517 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.774 Malloc1 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.774 [2024-12-05 13:45:45.174183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:13.774 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:13.775 13:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:13.775 { 00:14:13.775 "name": "Malloc1", 00:14:13.775 "aliases": [ 00:14:13.775 "d5234c92-8db5-4037-be1d-ce00cea9bdb9" 00:14:13.775 ], 00:14:13.775 "product_name": "Malloc disk", 00:14:13.775 "block_size": 512, 00:14:13.775 "num_blocks": 1048576, 00:14:13.775 "uuid": "d5234c92-8db5-4037-be1d-ce00cea9bdb9", 00:14:13.775 "assigned_rate_limits": { 00:14:13.775 "rw_ios_per_sec": 0, 00:14:13.775 "rw_mbytes_per_sec": 0, 00:14:13.775 "r_mbytes_per_sec": 0, 00:14:13.775 "w_mbytes_per_sec": 0 00:14:13.775 }, 00:14:13.775 "claimed": true, 00:14:13.775 "claim_type": "exclusive_write", 00:14:13.775 "zoned": false, 00:14:13.775 "supported_io_types": { 00:14:13.775 "read": true, 00:14:13.775 "write": true, 00:14:13.775 "unmap": true, 00:14:13.775 "flush": true, 00:14:13.775 "reset": true, 00:14:13.775 "nvme_admin": false, 00:14:13.775 "nvme_io": false, 00:14:13.775 "nvme_io_md": false, 00:14:13.775 "write_zeroes": true, 00:14:13.775 "zcopy": true, 00:14:13.775 "get_zone_info": false, 00:14:13.775 "zone_management": false, 00:14:13.775 "zone_append": false, 00:14:13.775 "compare": false, 00:14:13.775 "compare_and_write": 
false, 00:14:13.775 "abort": true, 00:14:13.775 "seek_hole": false, 00:14:13.775 "seek_data": false, 00:14:13.775 "copy": true, 00:14:13.775 "nvme_iov_md": false 00:14:13.775 }, 00:14:13.775 "memory_domains": [ 00:14:13.775 { 00:14:13.775 "dma_device_id": "system", 00:14:13.775 "dma_device_type": 1 00:14:13.775 }, 00:14:13.775 { 00:14:13.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.775 "dma_device_type": 2 00:14:13.775 } 00:14:13.775 ], 00:14:13.775 "driver_specific": {} 00:14:13.775 } 00:14:13.775 ]' 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:13.775 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.704 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:14:14.704 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:14.704 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.704 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:14.704 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:16.618 13:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:16.618 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:16.618 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:16.618 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:16.875 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:17.438 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:18.369 13:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.369 ************************************ 00:14:18.369 START TEST filesystem_ext4 00:14:18.369 ************************************ 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:18.369 13:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:18.369 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:18.369 mke2fs 1.47.0 (5-Feb-2023) 00:14:18.369 Discarding device blocks: 0/522240 done 00:14:18.369 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:18.369 Filesystem UUID: c367d2eb-8b39-4e46-bfee-5ada47aacb1b 00:14:18.369 Superblock backups stored on blocks: 00:14:18.369 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:18.369 00:14:18.369 Allocating group tables: 0/64 done 00:14:18.369 Writing inode tables: 0/64 done 00:14:20.267 Creating journal (8192 blocks): done 00:14:20.267 Writing superblocks and filesystem accounting information: 0/64 done 00:14:20.267 00:14:20.267 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:20.268 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:25.529 13:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2181782 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:25.529 00:14:25.529 real 0m7.201s 00:14:25.529 user 0m0.022s 00:14:25.529 sys 0m0.058s 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:25.529 ************************************ 00:14:25.529 END TEST filesystem_ext4 00:14:25.529 ************************************ 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:25.529 
13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.529 ************************************ 00:14:25.529 START TEST filesystem_btrfs 00:14:25.529 ************************************ 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:25.529 13:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:25.529 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:25.787 btrfs-progs v6.8.1 00:14:25.787 See https://btrfs.readthedocs.io for more information. 00:14:25.787 00:14:25.787 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:25.787 NOTE: several default settings have changed in version 5.15, please make sure 00:14:25.787 this does not affect your deployments: 00:14:25.787 - DUP for metadata (-m dup) 00:14:25.787 - enabled no-holes (-O no-holes) 00:14:25.787 - enabled free-space-tree (-R free-space-tree) 00:14:25.787 00:14:25.787 Label: (null) 00:14:25.787 UUID: 4178a50a-ca8c-4013-b660-f1f0e60b0068 00:14:25.787 Node size: 16384 00:14:25.787 Sector size: 4096 (CPU page size: 4096) 00:14:25.787 Filesystem size: 510.00MiB 00:14:25.787 Block group profiles: 00:14:25.787 Data: single 8.00MiB 00:14:25.787 Metadata: DUP 32.00MiB 00:14:25.787 System: DUP 8.00MiB 00:14:25.787 SSD detected: yes 00:14:25.787 Zoned device: no 00:14:25.787 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:25.787 Checksum: crc32c 00:14:25.787 Number of devices: 1 00:14:25.787 Devices: 00:14:25.787 ID SIZE PATH 00:14:25.787 1 510.00MiB /dev/nvme0n1p1 00:14:25.787 00:14:25.787 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:25.787 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:26.720 13:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:26.720 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:26.720 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:26.720 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:26.720 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2181782 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:26.721 00:14:26.721 real 0m1.234s 00:14:26.721 user 0m0.009s 00:14:26.721 sys 0m0.106s 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.721 
13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:26.721 ************************************ 00:14:26.721 END TEST filesystem_btrfs 00:14:26.721 ************************************ 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.721 ************************************ 00:14:26.721 START TEST filesystem_xfs 00:14:26.721 ************************************ 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:26.721 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:26.980 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:26.980 = sectsz=512 attr=2, projid32bit=1 00:14:26.980 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:26.980 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:26.980 data = bsize=4096 blocks=130560, imaxpct=25 00:14:26.980 = sunit=0 swidth=0 blks 00:14:26.980 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:26.980 log =internal log bsize=4096 blocks=16384, version=2 00:14:26.980 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:26.980 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:27.546 Discarding blocks...Done. 
00:14:27.546 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:27.546 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2181782 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:29.445 13:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:29.445 00:14:29.445 real 0m2.534s 00:14:29.445 user 0m0.018s 00:14:29.445 sys 0m0.055s 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:29.445 ************************************ 00:14:29.445 END TEST filesystem_xfs 00:14:29.445 ************************************ 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.445 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2181782 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2181782 ']' 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2181782 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2181782 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2181782' 00:14:29.446 killing process with pid 2181782 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2181782 00:14:29.446 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2181782 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:30.014 00:14:30.014 real 0m16.775s 00:14:30.014 user 1m4.923s 00:14:30.014 sys 0m2.100s 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.014 ************************************ 00:14:30.014 END TEST nvmf_filesystem_no_in_capsule 00:14:30.014 ************************************ 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.014 13:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:30.014 ************************************ 00:14:30.014 START TEST nvmf_filesystem_in_capsule 00:14:30.014 ************************************ 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2183896 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2183896 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2183896 ']' 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.014 13:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.014 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.014 [2024-12-05 13:46:01.524263] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:14:30.014 [2024-12-05 13:46:01.524344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.273 [2024-12-05 13:46:01.598533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.273 [2024-12-05 13:46:01.656541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.273 [2024-12-05 13:46:01.656597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.273 [2024-12-05 13:46:01.656610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.273 [2024-12-05 13:46:01.656621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.273 [2024-12-05 13:46:01.656631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.273 [2024-12-05 13:46:01.658080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.273 [2024-12-05 13:46:01.658139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.273 [2024-12-05 13:46:01.658207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.273 [2024-12-05 13:46:01.658210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.273 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.273 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:30.273 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:30.273 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.273 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.531 [2024-12-05 13:46:01.809215] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.531 Malloc1 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.531 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.532 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.532 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.532 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.532 13:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.532 [2024-12-05 13:46:02.008914] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.532 13:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:30.532 { 00:14:30.532 "name": "Malloc1", 00:14:30.532 "aliases": [ 00:14:30.532 "590ace4f-c6ca-44fa-af9c-bb5956eaabc4" 00:14:30.532 ], 00:14:30.532 "product_name": "Malloc disk", 00:14:30.532 "block_size": 512, 00:14:30.532 "num_blocks": 1048576, 00:14:30.532 "uuid": "590ace4f-c6ca-44fa-af9c-bb5956eaabc4", 00:14:30.532 "assigned_rate_limits": { 00:14:30.532 "rw_ios_per_sec": 0, 00:14:30.532 "rw_mbytes_per_sec": 0, 00:14:30.532 "r_mbytes_per_sec": 0, 00:14:30.532 "w_mbytes_per_sec": 0 00:14:30.532 }, 00:14:30.532 "claimed": true, 00:14:30.532 "claim_type": "exclusive_write", 00:14:30.532 "zoned": false, 00:14:30.532 "supported_io_types": { 00:14:30.532 "read": true, 00:14:30.532 "write": true, 00:14:30.532 "unmap": true, 00:14:30.532 "flush": true, 00:14:30.532 "reset": true, 00:14:30.532 "nvme_admin": false, 00:14:30.532 "nvme_io": false, 00:14:30.532 "nvme_io_md": false, 00:14:30.532 "write_zeroes": true, 00:14:30.532 "zcopy": true, 00:14:30.532 "get_zone_info": false, 00:14:30.532 "zone_management": false, 00:14:30.532 "zone_append": false, 00:14:30.532 "compare": false, 00:14:30.532 "compare_and_write": false, 00:14:30.532 "abort": true, 00:14:30.532 "seek_hole": false, 00:14:30.532 "seek_data": false, 00:14:30.532 "copy": true, 00:14:30.532 "nvme_iov_md": false 00:14:30.532 }, 00:14:30.532 "memory_domains": [ 00:14:30.532 { 00:14:30.532 "dma_device_id": "system", 00:14:30.532 "dma_device_type": 1 00:14:30.532 }, 00:14:30.532 { 00:14:30.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.532 "dma_device_type": 2 00:14:30.532 } 00:14:30.532 ], 00:14:30.532 
"driver_specific": {} 00:14:30.532 } 00:14:30.532 ]' 00:14:30.532 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:30.789 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:30.789 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:30.789 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:30.789 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:30.789 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:30.789 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:30.789 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:31.355 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:31.355 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:31.355 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.355 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:14:31.355 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:33.251 13:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:33.251 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:33.510 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:34.099 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:35.028 ************************************ 00:14:35.028 START TEST filesystem_in_capsule_ext4 00:14:35.028 ************************************ 00:14:35.028 13:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:35.028 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:35.028 mke2fs 1.47.0 (5-Feb-2023) 00:14:35.284 Discarding device blocks: 
0/522240 done 00:14:35.284 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:35.284 Filesystem UUID: e127e4b9-ba68-4729-bee3-1e5960220bfa 00:14:35.284 Superblock backups stored on blocks: 00:14:35.284 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:35.284 00:14:35.284 Allocating group tables: 0/64 done 00:14:35.284 Writing inode tables: 0/64 done 00:14:35.284 Creating journal (8192 blocks): done 00:14:35.284 Writing superblocks and filesystem accounting information: 0/64 done 00:14:35.284 00:14:35.284 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:35.284 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2183896 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:41.837 00:14:41.837 real 0m6.286s 00:14:41.837 user 0m0.021s 00:14:41.837 sys 0m0.059s 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:41.837 ************************************ 00:14:41.837 END TEST filesystem_in_capsule_ext4 00:14:41.837 ************************************ 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.837 ************************************ 00:14:41.837 START 
TEST filesystem_in_capsule_btrfs 00:14:41.837 ************************************ 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:41.837 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:41.837 btrfs-progs v6.8.1 00:14:41.837 See https://btrfs.readthedocs.io for more information. 00:14:41.837 00:14:41.837 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:41.837 NOTE: several default settings have changed in version 5.15, please make sure 00:14:41.837 this does not affect your deployments: 00:14:41.837 - DUP for metadata (-m dup) 00:14:41.837 - enabled no-holes (-O no-holes) 00:14:41.837 - enabled free-space-tree (-R free-space-tree) 00:14:41.837 00:14:41.837 Label: (null) 00:14:41.837 UUID: 6208606a-deff-423e-a9a1-0bc086026cf4 00:14:41.837 Node size: 16384 00:14:41.837 Sector size: 4096 (CPU page size: 4096) 00:14:41.837 Filesystem size: 510.00MiB 00:14:41.837 Block group profiles: 00:14:41.837 Data: single 8.00MiB 00:14:41.837 Metadata: DUP 32.00MiB 00:14:41.837 System: DUP 8.00MiB 00:14:41.837 SSD detected: yes 00:14:41.837 Zoned device: no 00:14:41.838 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:41.838 Checksum: crc32c 00:14:41.838 Number of devices: 1 00:14:41.838 Devices: 00:14:41.838 ID SIZE PATH 00:14:41.838 1 510.00MiB /dev/nvme0n1p1 00:14:41.838 00:14:41.838 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:41.838 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:42.403 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:42.403 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:42.403 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:42.403 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:42.403 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:42.403 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2183896 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:42.661 00:14:42.661 real 0m1.153s 00:14:42.661 user 0m0.019s 00:14:42.661 sys 0m0.095s 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:42.661 ************************************ 00:14:42.661 END TEST filesystem_in_capsule_btrfs 00:14:42.661 ************************************ 00:14:42.661 13:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.661 ************************************ 00:14:42.661 START TEST filesystem_in_capsule_xfs 00:14:42.661 ************************************ 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:42.661 
13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:42.661 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:42.661 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:42.661 = sectsz=512 attr=2, projid32bit=1 00:14:42.661 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:42.661 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:42.661 data = bsize=4096 blocks=130560, imaxpct=25 00:14:42.661 = sunit=0 swidth=0 blks 00:14:42.661 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:42.661 log =internal log bsize=4096 blocks=16384, version=2 00:14:42.661 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:42.661 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:43.593 Discarding blocks...Done. 
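The mkfs.xfs output above completes the third of the three filesystem subtests (ext4, btrfs, xfs); each one formats /dev/nvme0n1p1 through make_filesystem in common/autotest_common.sh, which adds `-F` only for ext4 and `-f` otherwise. The ext4 leg can be reproduced without any NVMe hardware by formatting a plain backing file, since `mkfs.ext4 -F` accepts a regular file; the temp-file setup below is illustrative and not part of the test itself:

```shell
# Format a regular file instead of /dev/nvme0n1p1 (illustrative stand-in).
img=$(mktemp)
truncate -s 64M "$img"      # small sparse image in place of the namespace
mkfs.ext4 -q -F "$img"      # -F: proceed even though this is not a block device

# The ext2/3/4 superblock magic (0xEF53, little-endian) sits at byte 1080:
dd if="$img" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1

rm -f "$img"
```

The same trick works for the btrfs and xfs legs with `mkfs.btrfs -f` and `mkfs.xfs -f`, provided the image is large enough for each filesystem's minimum size.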
00:14:43.593 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:43.593 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2183896 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
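After each umount, target/filesystem.sh (lines 40-43 above) asserts that both the whole device (nvme0n1) and its partition (nvme0n1p1) are still listed by lsblk; the assertion is nothing more than a whole-word grep over the device names. The word-boundary behaviour that makes this safe can be shown without a block device; the here-string below stands in for real `lsblk -l -o NAME` output:

```shell
# Stand-in for `lsblk -l -o NAME` output. grep -w anchors the pattern at
# word boundaries, so "nvme0n1" does NOT match inside "nvme0n1p1".
names=$'nvme0n1\nnvme0n1p1'

echo "$names" | grep -q -w nvme0n1   && echo "device listed"
echo "$names" | grep -q -w nvme0n1p1 && echo "partition listed"
echo "$names" | grep -c -w nvme0n1   # counts 1 line, not 2
```

Without `-w`, the first pattern would also match the partition line, and a check like `kill -0` above a stale partition table could pass spuriously.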
00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:46.115 00:14:46.115 real 0m3.337s 00:14:46.115 user 0m0.016s 00:14:46.115 sys 0m0.061s 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.115 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:46.115 ************************************ 00:14:46.115 END TEST filesystem_in_capsule_xfs 00:14:46.115 ************************************ 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.116 13:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2183896 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2183896 ']' 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2183896 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.116 13:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183896 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183896' 00:14:46.116 killing process with pid 2183896 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2183896 00:14:46.116 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2183896 00:14:46.683 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:46.683 00:14:46.683 real 0m16.518s 00:14:46.683 user 1m3.945s 00:14:46.683 sys 0m2.063s 00:14:46.683 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.683 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:46.683 ************************************ 00:14:46.683 END TEST nvmf_filesystem_in_capsule 00:14:46.683 ************************************ 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.683 rmmod nvme_tcp 00:14:46.683 rmmod nvme_fabrics 00:14:46.683 rmmod nvme_keyring 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.683 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.585 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.845 00:14:48.845 real 0m38.335s 00:14:48.845 user 2m10.080s 00:14:48.845 sys 0m6.017s 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:48.845 ************************************ 00:14:48.845 END TEST nvmf_filesystem 00:14:48.845 ************************************ 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.845 ************************************ 00:14:48.845 START TEST nvmf_target_discovery 00:14:48.845 ************************************ 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:48.845 * Looking for test storage... 
00:14:48.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:48.845 
13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.845 --rc genhtml_branch_coverage=1 00:14:48.845 --rc genhtml_function_coverage=1 00:14:48.845 --rc genhtml_legend=1 00:14:48.845 --rc geninfo_all_blocks=1 00:14:48.845 --rc geninfo_unexecuted_blocks=1 00:14:48.845 00:14:48.845 ' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.845 --rc genhtml_branch_coverage=1 00:14:48.845 --rc genhtml_function_coverage=1 00:14:48.845 --rc genhtml_legend=1 00:14:48.845 --rc geninfo_all_blocks=1 00:14:48.845 --rc geninfo_unexecuted_blocks=1 00:14:48.845 00:14:48.845 ' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.845 --rc genhtml_branch_coverage=1 00:14:48.845 --rc genhtml_function_coverage=1 00:14:48.845 --rc genhtml_legend=1 00:14:48.845 --rc geninfo_all_blocks=1 00:14:48.845 --rc geninfo_unexecuted_blocks=1 00:14:48.845 00:14:48.845 ' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.845 --rc genhtml_branch_coverage=1 00:14:48.845 --rc genhtml_function_coverage=1 00:14:48.845 --rc genhtml_legend=1 00:14:48.845 --rc geninfo_all_blocks=1 00:14:48.845 --rc geninfo_unexecuted_blocks=1 00:14:48.845 00:14:48.845 ' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.845 13:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.845 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:48.846 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.374 13:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.374 13:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.374 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:51.375 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:51.375 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.375 13:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:51.375 Found net devices under 0000:09:00.0: cvl_0_0 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.375 13:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:51.375 Found net devices under 0000:09:00.1: cvl_0_1 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:14:51.375 00:14:51.375 --- 10.0.0.2 ping statistics --- 00:14:51.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.375 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:14:51.375 00:14:51.375 --- 10.0.0.1 ping statistics --- 00:14:51.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.375 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2188047 00:14:51.375 13:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2188047 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2188047 ']' 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.375 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.375 [2024-12-05 13:46:22.768018] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:14:51.375 [2024-12-05 13:46:22.768096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.375 [2024-12-05 13:46:22.840277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.632 [2024-12-05 13:46:22.898405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:51.632 [2024-12-05 13:46:22.898475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.632 [2024-12-05 13:46:22.898490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.632 [2024-12-05 13:46:22.898502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.632 [2024-12-05 13:46:22.898512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.632 [2024-12-05 13:46:22.900288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.632 [2024-12-05 13:46:22.900315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.632 [2024-12-05 13:46:22.900364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.632 [2024-12-05 13:46:22.900367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.632 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.632 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:51.632 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:51.632 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:51.632 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 [2024-12-05 13:46:23.044598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 Null1 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 
13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 [2024-12-05 13:46:23.092579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 Null2 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 
13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 Null3 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.633 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 Null4 00:14:51.890 
13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:14:51.890 00:14:51.890 Discovery Log Number of Records 6, Generation counter 6 00:14:51.890 =====Discovery Log Entry 0====== 00:14:51.890 trtype: tcp 00:14:51.890 adrfam: ipv4 00:14:51.890 subtype: current discovery subsystem 00:14:51.890 treq: not required 00:14:51.890 portid: 0 00:14:51.890 trsvcid: 4420 00:14:51.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:51.890 traddr: 10.0.0.2 00:14:51.890 eflags: explicit discovery connections, duplicate discovery information 00:14:51.890 sectype: none 00:14:51.890 =====Discovery Log Entry 1====== 00:14:51.890 trtype: tcp 00:14:51.890 adrfam: ipv4 00:14:51.890 subtype: nvme subsystem 00:14:51.890 treq: not required 00:14:51.890 portid: 0 00:14:51.890 trsvcid: 4420 00:14:51.890 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:51.890 traddr: 10.0.0.2 00:14:51.890 eflags: none 00:14:51.890 sectype: none 00:14:51.890 =====Discovery Log Entry 2====== 00:14:51.890 
trtype: tcp 00:14:51.890 adrfam: ipv4 00:14:51.890 subtype: nvme subsystem 00:14:51.890 treq: not required 00:14:51.890 portid: 0 00:14:51.890 trsvcid: 4420 00:14:51.890 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:51.890 traddr: 10.0.0.2 00:14:51.890 eflags: none 00:14:51.890 sectype: none 00:14:51.890 =====Discovery Log Entry 3====== 00:14:51.890 trtype: tcp 00:14:51.890 adrfam: ipv4 00:14:51.890 subtype: nvme subsystem 00:14:51.890 treq: not required 00:14:51.890 portid: 0 00:14:51.890 trsvcid: 4420 00:14:51.890 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:51.890 traddr: 10.0.0.2 00:14:51.890 eflags: none 00:14:51.890 sectype: none 00:14:51.890 =====Discovery Log Entry 4====== 00:14:51.890 trtype: tcp 00:14:51.890 adrfam: ipv4 00:14:51.890 subtype: nvme subsystem 00:14:51.890 treq: not required 00:14:51.890 portid: 0 00:14:51.890 trsvcid: 4420 00:14:51.890 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:51.890 traddr: 10.0.0.2 00:14:51.890 eflags: none 00:14:51.890 sectype: none 00:14:51.890 =====Discovery Log Entry 5====== 00:14:51.890 trtype: tcp 00:14:51.890 adrfam: ipv4 00:14:51.890 subtype: discovery subsystem referral 00:14:51.890 treq: not required 00:14:51.890 portid: 0 00:14:51.890 trsvcid: 4430 00:14:51.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:51.890 traddr: 10.0.0.2 00:14:51.891 eflags: none 00:14:51.891 sectype: none 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:51.891 Perform nvmf subsystem discovery via RPC 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.891 [ 00:14:51.891 { 00:14:51.891 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:14:51.891 "subtype": "Discovery", 00:14:51.891 "listen_addresses": [ 00:14:51.891 { 00:14:51.891 "trtype": "TCP", 00:14:51.891 "adrfam": "IPv4", 00:14:51.891 "traddr": "10.0.0.2", 00:14:51.891 "trsvcid": "4420" 00:14:51.891 } 00:14:51.891 ], 00:14:51.891 "allow_any_host": true, 00:14:51.891 "hosts": [] 00:14:51.891 }, 00:14:51.891 { 00:14:51.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.891 "subtype": "NVMe", 00:14:51.891 "listen_addresses": [ 00:14:51.891 { 00:14:51.891 "trtype": "TCP", 00:14:51.891 "adrfam": "IPv4", 00:14:51.891 "traddr": "10.0.0.2", 00:14:51.891 "trsvcid": "4420" 00:14:51.891 } 00:14:51.891 ], 00:14:51.891 "allow_any_host": true, 00:14:51.891 "hosts": [], 00:14:51.891 "serial_number": "SPDK00000000000001", 00:14:51.891 "model_number": "SPDK bdev Controller", 00:14:51.891 "max_namespaces": 32, 00:14:51.891 "min_cntlid": 1, 00:14:51.891 "max_cntlid": 65519, 00:14:51.891 "namespaces": [ 00:14:51.891 { 00:14:51.891 "nsid": 1, 00:14:51.891 "bdev_name": "Null1", 00:14:51.891 "name": "Null1", 00:14:51.891 "nguid": "FA16F6583E964B65A2EAAF3BB688E380", 00:14:51.891 "uuid": "fa16f658-3e96-4b65-a2ea-af3bb688e380" 00:14:51.891 } 00:14:51.891 ] 00:14:51.891 }, 00:14:51.891 { 00:14:51.891 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:51.891 "subtype": "NVMe", 00:14:51.891 "listen_addresses": [ 00:14:51.891 { 00:14:51.891 "trtype": "TCP", 00:14:51.891 "adrfam": "IPv4", 00:14:51.891 "traddr": "10.0.0.2", 00:14:51.891 "trsvcid": "4420" 00:14:51.891 } 00:14:51.891 ], 00:14:51.891 "allow_any_host": true, 00:14:51.891 "hosts": [], 00:14:51.891 "serial_number": "SPDK00000000000002", 00:14:51.891 "model_number": "SPDK bdev Controller", 00:14:51.891 "max_namespaces": 32, 00:14:51.891 "min_cntlid": 1, 00:14:51.891 "max_cntlid": 65519, 00:14:51.891 "namespaces": [ 00:14:51.891 { 00:14:51.891 "nsid": 1, 00:14:51.891 "bdev_name": "Null2", 00:14:51.891 "name": "Null2", 00:14:51.891 "nguid": "B073759A79344858AEE51B9D1A555D63", 
00:14:51.891 "uuid": "b073759a-7934-4858-aee5-1b9d1a555d63" 00:14:51.891 } 00:14:51.891 ] 00:14:51.891 }, 00:14:51.891 { 00:14:51.891 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:51.891 "subtype": "NVMe", 00:14:51.891 "listen_addresses": [ 00:14:51.891 { 00:14:51.891 "trtype": "TCP", 00:14:51.891 "adrfam": "IPv4", 00:14:51.891 "traddr": "10.0.0.2", 00:14:51.891 "trsvcid": "4420" 00:14:51.891 } 00:14:51.891 ], 00:14:51.891 "allow_any_host": true, 00:14:51.891 "hosts": [], 00:14:51.891 "serial_number": "SPDK00000000000003", 00:14:51.891 "model_number": "SPDK bdev Controller", 00:14:51.891 "max_namespaces": 32, 00:14:51.891 "min_cntlid": 1, 00:14:51.891 "max_cntlid": 65519, 00:14:51.891 "namespaces": [ 00:14:51.891 { 00:14:51.891 "nsid": 1, 00:14:51.891 "bdev_name": "Null3", 00:14:51.891 "name": "Null3", 00:14:51.891 "nguid": "99C4D74B75E64456B4ED3F678C31AF03", 00:14:51.891 "uuid": "99c4d74b-75e6-4456-b4ed-3f678c31af03" 00:14:51.891 } 00:14:51.891 ] 00:14:51.891 }, 00:14:51.891 { 00:14:51.891 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:51.891 "subtype": "NVMe", 00:14:51.891 "listen_addresses": [ 00:14:51.891 { 00:14:51.891 "trtype": "TCP", 00:14:51.891 "adrfam": "IPv4", 00:14:51.891 "traddr": "10.0.0.2", 00:14:51.891 "trsvcid": "4420" 00:14:51.891 } 00:14:51.891 ], 00:14:51.891 "allow_any_host": true, 00:14:51.891 "hosts": [], 00:14:51.891 "serial_number": "SPDK00000000000004", 00:14:51.891 "model_number": "SPDK bdev Controller", 00:14:51.891 "max_namespaces": 32, 00:14:51.891 "min_cntlid": 1, 00:14:51.891 "max_cntlid": 65519, 00:14:51.891 "namespaces": [ 00:14:51.891 { 00:14:51.891 "nsid": 1, 00:14:51.891 "bdev_name": "Null4", 00:14:51.891 "name": "Null4", 00:14:51.891 "nguid": "5729F833586B40D7B8F2A4985FD2C328", 00:14:51.891 "uuid": "5729f833-586b-40d7-b8f2-a4985fd2c328" 00:14:51.891 } 00:14:51.891 ] 00:14:51.891 } 00:14:51.891 ] 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.891 
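The `rpc_cmd nvmf_get_subsystems` call above dumps the subsystem list as JSON, and the test later reduces similar output with `jq -r '.[].name'`. A minimal standalone sketch of that extraction without `jq`, run against an abridged sample copied from the JSON in this trace (static sample data, not live RPC output):

```shell
# Abridged sample of the nvmf_get_subsystems JSON recorded above.
json='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"}
]'
# Pull out every "nqn" value, mirroring what jq -r '.[].nqn' would return.
nqns=$(printf '%s\n' "$json" | grep -o '"nqn": "[^"]*"' | sed 's/.*: "\(.*\)"/\1/')
printf '%s\n' "$nqns"
```

This works only because the RPC output prints one key/value pair per line; for arbitrary JSON, `jq` (as the test itself uses) is the right tool.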
13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.891 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.149 rmmod nvme_tcp 00:14:52.149 rmmod nvme_fabrics 00:14:52.149 rmmod nvme_keyring 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2188047 ']' 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2188047 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2188047 ']' 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2188047 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
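The `killprocess 2188047` path traced above (`kill -0`, `uname`, `ps --no-headers -o comm=`, name comparison) exists to avoid signalling a recycled pid: the target is only killed if the pid is still alive and still runs the expected process name. A simplified standalone sketch of that guard (not the actual `autotest_common.sh` helper; it uses the POSIX `ps -o comm= -p` form rather than the GNU flags in the log):

```shell
# Simplified sketch of the killprocess guard: send the signal only when
# the pid is alive AND its command name still matches what we expect.
killprocess_sketch() {
  local pid=$1 expected=$2
  kill -0 "$pid" 2>/dev/null || return 0      # already exited: nothing to do
  local name
  name=$(ps -o comm= -p "$pid")
  [ "$name" = "$expected" ] || return 1       # pid was recycled: refuse to kill
  kill "$pid" 2>/dev/null
  wait "$pid" 2>/dev/null || true             # reap it if it is our child
}

sleep 30 &
killprocess_sketch $! sleep
```

In the real helper the expected name is the SPDK reactor thread (`reactor_0` in the trace above), which is why a plain `kill $pid` without the name check would be unsafe across test reruns.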
00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2188047 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2188047' 00:14:52.149 killing process with pid 2188047 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2188047 00:14:52.149 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2188047 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.409 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.940 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:54.940 00:14:54.940 real 0m5.680s 00:14:54.940 user 0m4.527s 00:14:54.940 sys 0m2.030s 00:14:54.940 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.940 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.940 ************************************ 00:14:54.940 END TEST nvmf_target_discovery 00:14:54.940 ************************************ 00:14:54.940 13:46:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:54.940 13:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.941 13:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.941 13:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.941 ************************************ 00:14:54.941 START TEST nvmf_referrals 00:14:54.941 ************************************ 00:14:54.941 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:54.941 * Looking for test storage... 
00:14:54.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.941 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:54.941 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:14:54.941 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:54.941 13:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:54.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.941 
--rc genhtml_branch_coverage=1 00:14:54.941 --rc genhtml_function_coverage=1 00:14:54.941 --rc genhtml_legend=1 00:14:54.941 --rc geninfo_all_blocks=1 00:14:54.941 --rc geninfo_unexecuted_blocks=1 00:14:54.941 00:14:54.941 ' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:54.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.941 --rc genhtml_branch_coverage=1 00:14:54.941 --rc genhtml_function_coverage=1 00:14:54.941 --rc genhtml_legend=1 00:14:54.941 --rc geninfo_all_blocks=1 00:14:54.941 --rc geninfo_unexecuted_blocks=1 00:14:54.941 00:14:54.941 ' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:54.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.941 --rc genhtml_branch_coverage=1 00:14:54.941 --rc genhtml_function_coverage=1 00:14:54.941 --rc genhtml_legend=1 00:14:54.941 --rc geninfo_all_blocks=1 00:14:54.941 --rc geninfo_unexecuted_blocks=1 00:14:54.941 00:14:54.941 ' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:54.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.941 --rc genhtml_branch_coverage=1 00:14:54.941 --rc genhtml_function_coverage=1 00:14:54.941 --rc genhtml_legend=1 00:14:54.941 --rc geninfo_all_blocks=1 00:14:54.941 --rc geninfo_unexecuted_blocks=1 00:14:54.941 00:14:54.941 ' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.941 
13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.941 13:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.941 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.942 13:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:54.942 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:56.844 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:56.844 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:56.844 Found net devices under 0000:09:00.0: cvl_0_0 00:14:56.844 13:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:56.844 Found net devices under 0000:09:00.1: cvl_0_1 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.844 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:57.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:14:57.102 00:14:57.102 --- 10.0.0.2 ping statistics --- 00:14:57.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.102 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:14:57.102 00:14:57.102 --- 10.0.0.1 ping statistics --- 00:14:57.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.102 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2190144 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2190144 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2190144 ']' 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.102 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.102 [2024-12-05 13:46:28.502982] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:14:57.102 [2024-12-05 13:46:28.503068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.102 [2024-12-05 13:46:28.575238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.360 [2024-12-05 13:46:28.635642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.360 [2024-12-05 13:46:28.635687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:57.360 [2024-12-05 13:46:28.635716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.360 [2024-12-05 13:46:28.635727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.360 [2024-12-05 13:46:28.635737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.360 [2024-12-05 13:46:28.637379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.360 [2024-12-05 13:46:28.637442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.360 [2024-12-05 13:46:28.637469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.360 [2024-12-05 13:46:28.637473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 [2024-12-05 13:46:28.797872] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 [2024-12-05 13:46:28.819614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:57.360 13:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.617 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:57.617 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:57.617 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:57.617 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.617 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:57.617 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.618 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.875 13:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.875 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:58.133 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.134 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.134 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.134 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:58.134 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.391 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:58.649 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:58.907 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:58.908 13:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:58.908 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:58.908 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:58.908 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.908 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:59.166 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:59.424 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:59.424 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:59.424 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:59.424 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:59.424 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.425 rmmod nvme_tcp 00:14:59.425 rmmod nvme_fabrics 00:14:59.425 rmmod nvme_keyring 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2190144 ']' 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2190144 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2190144 ']' 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2190144 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190144 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190144' 00:14:59.425 killing process with pid 2190144 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2190144 00:14:59.425 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2190144 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.685 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.636 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:01.636 00:15:01.636 real 0m7.265s 00:15:01.636 user 0m11.371s 00:15:01.636 sys 0m2.463s 00:15:01.636 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.636 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:01.636 
************************************ 00:15:01.636 END TEST nvmf_referrals 00:15:01.636 ************************************ 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.896 ************************************ 00:15:01.896 START TEST nvmf_connect_disconnect 00:15:01.896 ************************************ 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:01.896 * Looking for test storage... 
00:15:01.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.896 --rc genhtml_branch_coverage=1 00:15:01.896 --rc genhtml_function_coverage=1 00:15:01.896 --rc genhtml_legend=1 00:15:01.896 --rc geninfo_all_blocks=1 00:15:01.896 --rc geninfo_unexecuted_blocks=1 00:15:01.896 00:15:01.896 ' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.896 --rc genhtml_branch_coverage=1 00:15:01.896 --rc genhtml_function_coverage=1 00:15:01.896 --rc genhtml_legend=1 00:15:01.896 --rc geninfo_all_blocks=1 00:15:01.896 --rc geninfo_unexecuted_blocks=1 00:15:01.896 00:15:01.896 ' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.896 --rc genhtml_branch_coverage=1 00:15:01.896 --rc genhtml_function_coverage=1 00:15:01.896 --rc genhtml_legend=1 00:15:01.896 --rc geninfo_all_blocks=1 00:15:01.896 --rc geninfo_unexecuted_blocks=1 00:15:01.896 00:15:01.896 ' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.896 --rc genhtml_branch_coverage=1 00:15:01.896 --rc genhtml_function_coverage=1 00:15:01.896 --rc genhtml_legend=1 00:15:01.896 --rc geninfo_all_blocks=1 00:15:01.896 --rc geninfo_unexecuted_blocks=1 00:15:01.896 00:15:01.896 ' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.896 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:01.897 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.443 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:04.443 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:04.443 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:04.443 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:04.443 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:04.443 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:04.444 Found net devices under 0000:09:00.0: cvl_0_0 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:04.444 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:04.444 Found net devices under 0000:09:00.1: cvl_0_1 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.444 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:04.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:15:04.444 00:15:04.444 --- 10.0.0.2 ping statistics --- 00:15:04.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.444 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:15:04.444 00:15:04.444 --- 10.0.0.1 ping statistics --- 00:15:04.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.444 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2192456 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2192456 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2192456 ']' 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.444 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.444 [2024-12-05 13:46:35.833407] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:04.444 [2024-12-05 13:46:35.833509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.444 [2024-12-05 13:46:35.904493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.444 [2024-12-05 13:46:35.957923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:04.444 [2024-12-05 13:46:35.957980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.444 [2024-12-05 13:46:35.957995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.444 [2024-12-05 13:46:35.958021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.444 [2024-12-05 13:46:35.958031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.444 [2024-12-05 13:46:35.959556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.444 [2024-12-05 13:46:35.959583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.444 [2024-12-05 13:46:35.959633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.444 [2024-12-05 13:46:35.959636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:04.703 13:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.703 [2024-12-05 13:46:36.104135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.703 13:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:04.703 [2024-12-05 13:46:36.169043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:04.703 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:07.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.900 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:18.900 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:18.900 13:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:18.900 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:18.900 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:18.900 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:18.901 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:18.901 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:18.901 rmmod nvme_tcp 00:15:18.901 rmmod nvme_fabrics 00:15:18.901 rmmod nvme_keyring 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2192456 ']' 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2192456 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2192456 ']' 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2192456 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2192456 
00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2192456' 00:15:18.901 killing process with pid 2192456 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2192456 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2192456 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.901 13:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.901 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:21.442 00:15:21.442 real 0m19.161s 00:15:21.442 user 0m57.098s 00:15:21.442 sys 0m3.468s 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:21.442 ************************************ 00:15:21.442 END TEST nvmf_connect_disconnect 00:15:21.442 ************************************ 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.442 ************************************ 00:15:21.442 START TEST nvmf_multitarget 00:15:21.442 ************************************ 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:21.442 * Looking for test storage... 
00:15:21.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.442 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:21.443 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.443 --rc genhtml_branch_coverage=1 00:15:21.443 --rc genhtml_function_coverage=1 00:15:21.443 --rc genhtml_legend=1 00:15:21.443 --rc geninfo_all_blocks=1 00:15:21.443 --rc geninfo_unexecuted_blocks=1 00:15:21.443 00:15:21.443 ' 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:21.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.443 --rc genhtml_branch_coverage=1 00:15:21.443 --rc genhtml_function_coverage=1 00:15:21.443 --rc genhtml_legend=1 00:15:21.443 --rc geninfo_all_blocks=1 00:15:21.443 --rc geninfo_unexecuted_blocks=1 00:15:21.443 00:15:21.443 ' 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:21.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.443 --rc genhtml_branch_coverage=1 00:15:21.443 --rc genhtml_function_coverage=1 00:15:21.443 --rc genhtml_legend=1 00:15:21.443 --rc geninfo_all_blocks=1 00:15:21.443 --rc geninfo_unexecuted_blocks=1 00:15:21.443 00:15:21.443 ' 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:21.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.443 --rc genhtml_branch_coverage=1 00:15:21.443 --rc genhtml_function_coverage=1 00:15:21.443 --rc genhtml_legend=1 00:15:21.443 --rc geninfo_all_blocks=1 00:15:21.443 --rc geninfo_unexecuted_blocks=1 00:15:21.443 00:15:21.443 ' 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.443 13:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:21.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:21.443 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.444 13:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:21.444 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:23.350 13:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:23.350 13:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:23.350 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:23.350 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.350 13:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:23.350 Found net devices under 0000:09:00.0: cvl_0_0 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.350 
13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:23.350 Found net devices under 0000:09:00.1: cvl_0_1 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.350 13:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:23.350 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:23.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:15:23.351 00:15:23.351 --- 10.0.0.2 ping statistics --- 00:15:23.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.351 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:23.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:15:23.351 00:15:23.351 --- 10.0.0.1 ping statistics --- 00:15:23.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.351 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:23.351 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2196228 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2196228 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2196228 ']' 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.609 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:23.609 [2024-12-05 13:46:54.936455] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:23.609 [2024-12-05 13:46:54.936539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.609 [2024-12-05 13:46:55.009452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.609 [2024-12-05 13:46:55.064839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.609 [2024-12-05 13:46:55.064885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:23.609 [2024-12-05 13:46:55.064919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.609 [2024-12-05 13:46:55.064930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.609 [2024-12-05 13:46:55.064940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.609 [2024-12-05 13:46:55.066522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.609 [2024-12-05 13:46:55.066580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.609 [2024-12-05 13:46:55.066631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.609 [2024-12-05 13:46:55.066634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:23.868 13:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:23.868 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:24.127 "nvmf_tgt_1" 00:15:24.127 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:24.127 "nvmf_tgt_2" 00:15:24.127 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:24.127 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:24.384 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:24.384 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:24.384 true 00:15:24.384 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:24.641 true 00:15:24.641 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:24.641 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.641 rmmod nvme_tcp 00:15:24.641 rmmod nvme_fabrics 00:15:24.641 rmmod nvme_keyring 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2196228 ']' 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2196228 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2196228 ']' 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2196228 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.641 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2196228 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2196228' 00:15:24.898 killing process with pid 2196228 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2196228 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2196228 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.898 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:27.432 00:15:27.432 real 0m6.028s 00:15:27.432 user 0m7.040s 00:15:27.432 sys 0m2.042s 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:27.432 ************************************ 00:15:27.432 END TEST nvmf_multitarget 00:15:27.432 ************************************ 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.432 ************************************ 00:15:27.432 START TEST nvmf_rpc 00:15:27.432 ************************************ 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:27.432 * Looking for test storage... 
00:15:27.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.432 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:27.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.432 --rc genhtml_branch_coverage=1 00:15:27.432 --rc genhtml_function_coverage=1 00:15:27.432 --rc genhtml_legend=1 00:15:27.432 --rc geninfo_all_blocks=1 00:15:27.432 --rc geninfo_unexecuted_blocks=1 
00:15:27.432 00:15:27.432 ' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:27.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.432 --rc genhtml_branch_coverage=1 00:15:27.432 --rc genhtml_function_coverage=1 00:15:27.432 --rc genhtml_legend=1 00:15:27.432 --rc geninfo_all_blocks=1 00:15:27.432 --rc geninfo_unexecuted_blocks=1 00:15:27.432 00:15:27.432 ' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:27.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.432 --rc genhtml_branch_coverage=1 00:15:27.432 --rc genhtml_function_coverage=1 00:15:27.432 --rc genhtml_legend=1 00:15:27.432 --rc geninfo_all_blocks=1 00:15:27.432 --rc geninfo_unexecuted_blocks=1 00:15:27.432 00:15:27.432 ' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:27.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.432 --rc genhtml_branch_coverage=1 00:15:27.432 --rc genhtml_function_coverage=1 00:15:27.432 --rc genhtml_legend=1 00:15:27.432 --rc geninfo_all_blocks=1 00:15:27.432 --rc geninfo_unexecuted_blocks=1 00:15:27.432 00:15:27.432 ' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.432 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.432 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:27.433 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:27.433 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.349 
13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:15:29.349 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:29.349 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:29.349 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:29.350 Found net devices under 0000:09:00.0: cvl_0_0 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:29.350 Found net devices under 0000:09:00.1: cvl_0_1 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.350 13:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:29.350 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:29.350 
13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:29.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:15:29.646 00:15:29.646 --- 10.0.0.2 ping statistics --- 00:15:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.646 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:15:29.646 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:29.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:15:29.646 00:15:29.646 --- 10.0.0.1 ping statistics --- 00:15:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.646 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:29.647 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2198333 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.647 
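The `nvmf_tcp_init` steps traced above (common.sh@250–291) move one NIC port into a private network namespace, so that target and initiator traffic crosses a real TCP path between 10.0.0.1 and 10.0.0.2 even on a single machine. A dry-run sketch of that topology, with `run` echoing each command instead of executing it, since the real sequence needs root and the physical `cvl_0_*` interfaces from this rig:

```shell
#!/bin/sh
# Dry-run of the namespace topology built in the log; "run" only prints.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk        # target namespace name from the log
TGT=cvl_0_0               # target-side port, gets 10.0.0.2 inside the ns
INI=cvl_0_1               # initiator-side port, gets 10.0.0.1 in the root ns

plan=$(
  run ip netns add "$NS"
  run ip link set "$TGT" netns "$NS"
  run ip addr add 10.0.0.1/24 dev "$INI"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  run ip link set "$INI" up
  run ip netns exec "$NS" ip link set "$TGT" up
  run ip netns exec "$NS" ip link set lo up
  run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2
)
printf '%s\n' "$plan"
```

The bidirectional pings in the log verify this wiring, after which `nvmf_tgt` is launched inside the namespace via `NVMF_TARGET_NS_CMD` (`ip netns exec cvl_0_0_ns_spdk ...`), so every later `nvme connect` from the root namespace is a genuine over-the-wire NVMe/TCP connection.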
13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2198333 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2198333 ']' 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.647 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.647 [2024-12-05 13:47:01.071282] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:29.647 [2024-12-05 13:47:01.071374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.647 [2024-12-05 13:47:01.141884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.923 [2024-12-05 13:47:01.203503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.923 [2024-12-05 13:47:01.203550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.923 [2024-12-05 13:47:01.203565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.923 [2024-12-05 13:47:01.203577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:29.923 [2024-12-05 13:47:01.203587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.923 [2024-12-05 13:47:01.205200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.923 [2024-12-05 13:47:01.205269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.923 [2024-12-05 13:47:01.205323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.923 [2024-12-05 13:47:01.205332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:29.923 "tick_rate": 2700000000, 00:15:29.923 "poll_groups": [ 00:15:29.923 { 00:15:29.923 "name": "nvmf_tgt_poll_group_000", 00:15:29.923 "admin_qpairs": 0, 00:15:29.923 "io_qpairs": 0, 00:15:29.923 
"current_admin_qpairs": 0, 00:15:29.923 "current_io_qpairs": 0, 00:15:29.923 "pending_bdev_io": 0, 00:15:29.923 "completed_nvme_io": 0, 00:15:29.923 "transports": [] 00:15:29.923 }, 00:15:29.923 { 00:15:29.923 "name": "nvmf_tgt_poll_group_001", 00:15:29.923 "admin_qpairs": 0, 00:15:29.923 "io_qpairs": 0, 00:15:29.923 "current_admin_qpairs": 0, 00:15:29.923 "current_io_qpairs": 0, 00:15:29.923 "pending_bdev_io": 0, 00:15:29.923 "completed_nvme_io": 0, 00:15:29.923 "transports": [] 00:15:29.923 }, 00:15:29.923 { 00:15:29.923 "name": "nvmf_tgt_poll_group_002", 00:15:29.923 "admin_qpairs": 0, 00:15:29.923 "io_qpairs": 0, 00:15:29.923 "current_admin_qpairs": 0, 00:15:29.923 "current_io_qpairs": 0, 00:15:29.923 "pending_bdev_io": 0, 00:15:29.923 "completed_nvme_io": 0, 00:15:29.923 "transports": [] 00:15:29.923 }, 00:15:29.923 { 00:15:29.923 "name": "nvmf_tgt_poll_group_003", 00:15:29.923 "admin_qpairs": 0, 00:15:29.923 "io_qpairs": 0, 00:15:29.923 "current_admin_qpairs": 0, 00:15:29.923 "current_io_qpairs": 0, 00:15:29.923 "pending_bdev_io": 0, 00:15:29.923 "completed_nvme_io": 0, 00:15:29.923 "transports": [] 00:15:29.923 } 00:15:29.923 ] 00:15:29.923 }' 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.923 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.923 [2024-12-05 13:47:01.444271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:30.182 "tick_rate": 2700000000, 00:15:30.182 "poll_groups": [ 00:15:30.182 { 00:15:30.182 "name": "nvmf_tgt_poll_group_000", 00:15:30.182 "admin_qpairs": 0, 00:15:30.182 "io_qpairs": 0, 00:15:30.182 "current_admin_qpairs": 0, 00:15:30.182 "current_io_qpairs": 0, 00:15:30.182 "pending_bdev_io": 0, 00:15:30.182 "completed_nvme_io": 0, 00:15:30.182 "transports": [ 00:15:30.182 { 00:15:30.182 "trtype": "TCP" 00:15:30.182 } 00:15:30.182 ] 00:15:30.182 }, 00:15:30.182 { 00:15:30.182 "name": "nvmf_tgt_poll_group_001", 00:15:30.182 "admin_qpairs": 0, 00:15:30.182 "io_qpairs": 0, 00:15:30.182 "current_admin_qpairs": 0, 00:15:30.182 "current_io_qpairs": 0, 00:15:30.182 "pending_bdev_io": 0, 00:15:30.182 "completed_nvme_io": 0, 00:15:30.182 "transports": [ 00:15:30.182 { 00:15:30.182 "trtype": "TCP" 00:15:30.182 } 00:15:30.182 ] 00:15:30.182 }, 00:15:30.182 { 00:15:30.182 "name": "nvmf_tgt_poll_group_002", 00:15:30.182 "admin_qpairs": 0, 00:15:30.182 "io_qpairs": 0, 00:15:30.182 
"current_admin_qpairs": 0, 00:15:30.182 "current_io_qpairs": 0, 00:15:30.182 "pending_bdev_io": 0, 00:15:30.182 "completed_nvme_io": 0, 00:15:30.182 "transports": [ 00:15:30.182 { 00:15:30.182 "trtype": "TCP" 00:15:30.182 } 00:15:30.182 ] 00:15:30.182 }, 00:15:30.182 { 00:15:30.182 "name": "nvmf_tgt_poll_group_003", 00:15:30.182 "admin_qpairs": 0, 00:15:30.182 "io_qpairs": 0, 00:15:30.182 "current_admin_qpairs": 0, 00:15:30.182 "current_io_qpairs": 0, 00:15:30.182 "pending_bdev_io": 0, 00:15:30.182 "completed_nvme_io": 0, 00:15:30.182 "transports": [ 00:15:30.182 { 00:15:30.182 "trtype": "TCP" 00:15:30.182 } 00:15:30.182 ] 00:15:30.182 } 00:15:30.182 ] 00:15:30.182 }' 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:30.182 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.183 Malloc1 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.183 [2024-12-05 13:47:01.593410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.183 
13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:15:30.183 [2024-12-05 13:47:01.615976] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:15:30.183 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:30.183 could not add new controller: failed to write to nvme-fabrics device 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.183 13:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.183 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:30.748 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:30.748 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:30.748 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.748 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:30.748 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.276 13:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.276 [2024-12-05 13:47:04.356303] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:15:33.276 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:33.276 could not add new controller: failed to write to nvme-fabrics device 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:33.276 13:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.276 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.842 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:33.842 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:33.842 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.842 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:33.842 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.740 [2024-12-05 13:47:07.240106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.740 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:36.673 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:36.673 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:36.673 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.673 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:36.673 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:38.570 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:38.570 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:38.570 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.570 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:38.570 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.570 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:38.570 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:38.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.570 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:38.570 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:38.570 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:38.570 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.570 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:38.570 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.570 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.571 13:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.571 [2024-12-05 13:47:10.077182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.571 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.828 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.828 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.395 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:39.395 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:39.395 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.395 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:39.395 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:41.289 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:41.289 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:41.289 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.289 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:41.289 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.289 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:41.289 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
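The `waitforserial` helper traced repeatedly above (autotest_common.sh lines 1202–1212) polls `lsblk` until a block device carrying the subsystem serial appears. A minimal sketch of that polling loop, reconstructed from the commands visible in the trace — the function body here is an approximation, not the actual autotest_common.sh source:

```shell
# Sketch of the waitforserial helper seen in the trace: after `nvme connect`,
# poll `lsblk -l -o NAME,SERIAL` until the expected number of devices with the
# given serial shows up, retrying up to 15 times. Reconstructed from the log;
# the real implementation lives in test/common/autotest_common.sh.
waitforserial() {
    local serial=$1
    local nvme_device_counter=${2:-1} nvme_devices=0
    local i=0
    sleep 2   # give the kernel a moment to enumerate the namespace
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        if (( nvme_devices == nvme_device_counter )); then
            return 0
        fi
        sleep 2
    done
    return 1
}
```

The trace shows the happy path: one device with serial `SPDKISFASTANDAWESOME` is found on the first check, so the helper returns 0 immediately.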
00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.545 [2024-12-05 13:47:12.945032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.545 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.110 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:42.110 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:42.110 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:15:42.110 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:42.110 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.630 [2024-12-05 13:47:15.728121] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.631 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.631 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:44.887 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:44.887 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:44.887 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.887 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:44.887 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:15:47.412 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 [2024-12-05 13:47:18.460040] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.413 13:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.413 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:47.671 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:47.671 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:47.671 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:47.671 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:47.671 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:49.570 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:49.570 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:15:49.570 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:49.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 [2024-12-05 13:47:21.249018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 [2024-12-05 13:47:21.297059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.828 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.829 
13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.829 [2024-12-05 13:47:21.345231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.829 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:50.087 
13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 [2024-12-05 13:47:21.393395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.087 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 [2024-12-05 
13:47:21.441584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.088 
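The iterations traced above repeat the same six-step RPC cycle from target/rpc.sh (lines 99–107): create a subsystem, add a TCP listener, attach the Malloc1 namespace, allow any host, then remove the namespace and delete the subsystem. A minimal sketch of that loop, with the command names and arguments taken verbatim from this log; the `RPC` variable defaulting to `echo` is an assumption so the sketch runs standalone instead of driving a live SPDK target through scripts/rpc.py:

```shell
#!/usr/bin/env bash
# Sketch of the create/teardown cycle traced above (target/rpc.sh@99-107).
# rpc_cmd normally invokes scripts/rpc.py against a running nvmf target;
# RPC=echo (an assumption for dry runs) makes the loop just print the
# RPCs it would issue, in order.
RPC="${RPC:-echo}"
NQN=nqn.2016-06.io.spdk:cnode1

run_loop() {
    local loops=$1
    for i in $(seq 1 "$loops"); do
        $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
        $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
        $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
        $RPC nvmf_subsystem_allow_any_host "$NQN"
        $RPC nvmf_subsystem_remove_ns "$NQN" 1
        $RPC nvmf_delete_subsystem "$NQN"
    done
}

run_loop 3
```

Each iteration tears the subsystem down completely, which is why every pass through the loop logs a fresh "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice.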
13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:50.088 "tick_rate": 2700000000, 00:15:50.088 "poll_groups": [ 00:15:50.088 { 00:15:50.088 "name": "nvmf_tgt_poll_group_000", 00:15:50.088 "admin_qpairs": 2, 00:15:50.088 "io_qpairs": 84, 00:15:50.088 "current_admin_qpairs": 0, 00:15:50.088 "current_io_qpairs": 0, 00:15:50.088 "pending_bdev_io": 0, 00:15:50.088 "completed_nvme_io": 207, 00:15:50.088 "transports": [ 00:15:50.088 { 00:15:50.088 "trtype": "TCP" 00:15:50.088 } 00:15:50.088 ] 00:15:50.088 }, 00:15:50.088 { 00:15:50.088 "name": "nvmf_tgt_poll_group_001", 00:15:50.088 "admin_qpairs": 2, 00:15:50.088 "io_qpairs": 84, 00:15:50.088 "current_admin_qpairs": 0, 00:15:50.088 "current_io_qpairs": 0, 00:15:50.088 "pending_bdev_io": 0, 00:15:50.088 "completed_nvme_io": 150, 00:15:50.088 "transports": [ 00:15:50.088 { 00:15:50.088 "trtype": "TCP" 00:15:50.088 } 00:15:50.088 ] 00:15:50.088 }, 00:15:50.088 { 00:15:50.088 "name": "nvmf_tgt_poll_group_002", 00:15:50.088 "admin_qpairs": 1, 00:15:50.088 "io_qpairs": 84, 00:15:50.088 "current_admin_qpairs": 0, 00:15:50.088 "current_io_qpairs": 0, 00:15:50.088 "pending_bdev_io": 0, 00:15:50.088 "completed_nvme_io": 162, 00:15:50.088 "transports": [ 00:15:50.088 { 00:15:50.088 "trtype": "TCP" 00:15:50.088 } 00:15:50.088 ] 00:15:50.088 }, 00:15:50.088 { 00:15:50.088 "name": "nvmf_tgt_poll_group_003", 00:15:50.088 "admin_qpairs": 2, 00:15:50.088 "io_qpairs": 84, 
00:15:50.088 "current_admin_qpairs": 0, 00:15:50.088 "current_io_qpairs": 0, 00:15:50.088 "pending_bdev_io": 0, 00:15:50.088 "completed_nvme_io": 167, 00:15:50.088 "transports": [ 00:15:50.088 { 00:15:50.088 "trtype": "TCP" 00:15:50.088 } 00:15:50.088 ] 00:15:50.088 } 00:15:50.088 ] 00:15:50.088 }' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
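The `jsum` helper traced above (target/rpc.sh@19-20) sums one numeric field across all poll groups in the nvmf_get_stats JSON: jq extracts the field from each group, awk totals the values. With the stats shown here, admin_qpairs sums to 2+2+1+2 = 7 and io_qpairs to 84×4 = 336, matching the `(( 7 > 0 ))` and `(( 336 > 0 ))` checks. A sketch, assuming jq is available and trimming the stats payload to the fields the filters actually read:

```shell
#!/usr/bin/env bash
# Sketch of the jsum helper (target/rpc.sh@19-20): apply a jq filter to
# the stats JSON, then sum the resulting numbers with awk. The payload
# below is trimmed from the nvmf_get_stats output in this log.
stats='{
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 2, "io_qpairs": 84},
    {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 2, "io_qpairs": 84},
    {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 1, "io_qpairs": 84},
    {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 2, "io_qpairs": 84}
  ]
}'

jsum() {
    local filter=$1
    # jq emits one number per poll group; awk accumulates them.
    jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # 7, as in the (( 7 > 0 )) check
jsum '.poll_groups[].io_qpairs'      # 336, as in the (( 336 > 0 )) check
```

The test only asserts the sums are positive, so it passes regardless of how the qpairs were distributed across poll groups.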
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.088 rmmod nvme_tcp 00:15:50.088 rmmod nvme_fabrics 00:15:50.088 rmmod nvme_keyring 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2198333 ']' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2198333 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2198333 ']' 00:15:50.088 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2198333 00:15:50.347 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:15:50.347 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.347 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198333 00:15:50.347 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.347 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.347 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198333' 00:15:50.347 killing process with pid 2198333 00:15:50.347 13:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2198333 00:15:50.347 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2198333 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.606 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:52.512 00:15:52.512 real 0m25.432s 00:15:52.512 user 1m22.155s 00:15:52.512 sys 0m4.257s 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 ************************************ 00:15:52.512 END TEST 
nvmf_rpc 00:15:52.512 ************************************ 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 ************************************ 00:15:52.512 START TEST nvmf_invalid 00:15:52.512 ************************************ 00:15:52.512 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:52.772 * Looking for test storage... 00:15:52.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.772 --rc genhtml_branch_coverage=1 00:15:52.772 --rc genhtml_function_coverage=1 00:15:52.772 --rc genhtml_legend=1 00:15:52.772 --rc geninfo_all_blocks=1 00:15:52.772 --rc geninfo_unexecuted_blocks=1 00:15:52.772 00:15:52.772 ' 
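The trace above walks through `lt 1.15 2` from scripts/common.sh: both versions are split on `.`, `-`, or `:` into arrays, each component is normalized to a decimal, and the arrays are compared element by element until one component differs (here 1 < 2, so the comparison returns true immediately). A sketch of that logic; treating missing or non-numeric components as 0 is an assumption, since this trace returns before exercising those paths:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above (scripts/common.sh
# cmp_versions): split on ".", "-", or ":" and compare components as
# integers, left to right. Returns 0 (true) iff ver1 is strictly less
# than ver2, as in "lt 1.15 2".
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Assumption: absent or non-numeric components count as 0.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1   # all components equal: not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the numeric comparison is what makes 1.15 sort below 2: a plain string compare would put "1.15" after "1." prefixes but could not handle multi-digit components like 15 correctly.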
00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.772 --rc genhtml_branch_coverage=1 00:15:52.772 --rc genhtml_function_coverage=1 00:15:52.772 --rc genhtml_legend=1 00:15:52.772 --rc geninfo_all_blocks=1 00:15:52.772 --rc geninfo_unexecuted_blocks=1 00:15:52.772 00:15:52.772 ' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.772 --rc genhtml_branch_coverage=1 00:15:52.772 --rc genhtml_function_coverage=1 00:15:52.772 --rc genhtml_legend=1 00:15:52.772 --rc geninfo_all_blocks=1 00:15:52.772 --rc geninfo_unexecuted_blocks=1 00:15:52.772 00:15:52.772 ' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.772 --rc genhtml_branch_coverage=1 00:15:52.772 --rc genhtml_function_coverage=1 00:15:52.772 --rc genhtml_legend=1 00:15:52.772 --rc geninfo_all_blocks=1 00:15:52.772 --rc geninfo_unexecuted_blocks=1 00:15:52.772 00:15:52.772 ' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.772 13:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.772 
13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.772 13:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.772 13:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:52.772 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.385 13:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.385 13:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:55.385 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:55.385 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:55.385 Found net devices under 0000:09:00.0: cvl_0_0 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.385 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:55.386 Found net devices under 0000:09:00.1: cvl_0_1 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.386 13:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.386 13:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:55.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:15:55.386 00:15:55.386 --- 10.0.0.2 ping statistics --- 00:15:55.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.386 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:15:55.386 00:15:55.386 --- 10.0.0.1 ping statistics --- 00:15:55.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.386 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.386 13:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2202960 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2202960 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2202960 ']' 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 [2024-12-05 13:47:26.498987] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:55.386 [2024-12-05 13:47:26.499068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.386 [2024-12-05 13:47:26.566647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.386 [2024-12-05 13:47:26.618349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.386 [2024-12-05 13:47:26.618403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.386 [2024-12-05 13:47:26.618438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.386 [2024-12-05 13:47:26.618450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.386 [2024-12-05 13:47:26.618459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.386 [2024-12-05 13:47:26.620105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.386 [2024-12-05 13:47:26.620168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.386 [2024-12-05 13:47:26.620214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.386 [2024-12-05 13:47:26.620217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:55.386 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12453 00:15:55.643 [2024-12-05 13:47:27.040015] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:55.643 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:55.643 { 00:15:55.643 "nqn": "nqn.2016-06.io.spdk:cnode12453", 00:15:55.643 "tgt_name": "foobar", 00:15:55.643 "method": "nvmf_create_subsystem", 00:15:55.643 "req_id": 1 00:15:55.643 } 00:15:55.643 Got JSON-RPC error 
response 00:15:55.643 response: 00:15:55.643 { 00:15:55.643 "code": -32603, 00:15:55.643 "message": "Unable to find target foobar" 00:15:55.643 }' 00:15:55.643 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:55.643 { 00:15:55.643 "nqn": "nqn.2016-06.io.spdk:cnode12453", 00:15:55.643 "tgt_name": "foobar", 00:15:55.643 "method": "nvmf_create_subsystem", 00:15:55.643 "req_id": 1 00:15:55.643 } 00:15:55.643 Got JSON-RPC error response 00:15:55.643 response: 00:15:55.643 { 00:15:55.643 "code": -32603, 00:15:55.643 "message": "Unable to find target foobar" 00:15:55.643 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:55.643 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:55.643 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5856 00:15:55.900 [2024-12-05 13:47:27.312956] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5856: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:55.900 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:55.900 { 00:15:55.900 "nqn": "nqn.2016-06.io.spdk:cnode5856", 00:15:55.900 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:55.900 "method": "nvmf_create_subsystem", 00:15:55.900 "req_id": 1 00:15:55.900 } 00:15:55.900 Got JSON-RPC error response 00:15:55.900 response: 00:15:55.900 { 00:15:55.900 "code": -32602, 00:15:55.900 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:55.900 }' 00:15:55.900 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:55.900 { 00:15:55.900 "nqn": "nqn.2016-06.io.spdk:cnode5856", 00:15:55.900 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:55.900 "method": "nvmf_create_subsystem", 00:15:55.900 
"req_id": 1 00:15:55.900 } 00:15:55.900 Got JSON-RPC error response 00:15:55.900 response: 00:15:55.900 { 00:15:55.900 "code": -32602, 00:15:55.900 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:55.900 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:55.900 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:55.900 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23188 00:15:56.157 [2024-12-05 13:47:27.597891] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23188: invalid model number 'SPDK_Controller' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:56.157 { 00:15:56.157 "nqn": "nqn.2016-06.io.spdk:cnode23188", 00:15:56.157 "model_number": "SPDK_Controller\u001f", 00:15:56.157 "method": "nvmf_create_subsystem", 00:15:56.157 "req_id": 1 00:15:56.157 } 00:15:56.157 Got JSON-RPC error response 00:15:56.157 response: 00:15:56.157 { 00:15:56.157 "code": -32602, 00:15:56.157 "message": "Invalid MN SPDK_Controller\u001f" 00:15:56.157 }' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:56.157 { 00:15:56.157 "nqn": "nqn.2016-06.io.spdk:cnode23188", 00:15:56.157 "model_number": "SPDK_Controller\u001f", 00:15:56.157 "method": "nvmf_create_subsystem", 00:15:56.157 "req_id": 1 00:15:56.157 } 00:15:56.157 Got JSON-RPC error response 00:15:56.157 response: 00:15:56.157 { 00:15:56.157 "code": -32602, 00:15:56.157 "message": "Invalid MN SPDK_Controller\u001f" 00:15:56.157 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:56.157 13:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 
00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.157 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:56.416 
13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.416 13:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:15:56.416 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\!Tt=z.Jr^+_ /dev/null' 00:15:59.759 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:01.665 00:16:01.665 real 0m9.113s 00:16:01.665 user 0m21.819s 00:16:01.665 sys 0m2.569s 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.665 ************************************ 00:16:01.665 END TEST nvmf_invalid 00:16:01.665 ************************************ 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra -- 
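The nvmf_invalid trace above shows target/invalid.sh assembling a random NQN string one character at a time: each iteration converts a codepoint to hex with `printf %x`, then appends the decoded character via `echo -e '\xNN'`. A minimal re-creation of that pattern (the function name and interface are my own, not from invalid.sh):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the character-by-character string builder traced
# above: convert each decimal codepoint to hex, decode it with echo -e,
# and append it to the accumulating string.
gen_string() {
    local -a codes=("$@")
    local string='' c hex
    for c in "${codes[@]}"; do
        hex=$(printf %x "$c")         # e.g. 74 -> 4a
        string+=$(echo -e "\x$hex")   # e.g. \x4a -> J
    done
    echo "$string"
}
```

For example, `gen_string 74 114 94 43` reproduces the `J`, `r`, `^`, `+` steps visible in the loop above, yielding `Jr^+`.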
nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.665 ************************************ 00:16:01.665 START TEST nvmf_connect_stress 00:16:01.665 ************************************ 00:16:01.665 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:01.923 * Looking for test storage... 00:16:01.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:01.923 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:01.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.924 --rc genhtml_branch_coverage=1 00:16:01.924 --rc genhtml_function_coverage=1 00:16:01.924 --rc genhtml_legend=1 00:16:01.924 --rc 
geninfo_all_blocks=1 00:16:01.924 --rc geninfo_unexecuted_blocks=1 00:16:01.924 00:16:01.924 ' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:01.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.924 --rc genhtml_branch_coverage=1 00:16:01.924 --rc genhtml_function_coverage=1 00:16:01.924 --rc genhtml_legend=1 00:16:01.924 --rc geninfo_all_blocks=1 00:16:01.924 --rc geninfo_unexecuted_blocks=1 00:16:01.924 00:16:01.924 ' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:01.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.924 --rc genhtml_branch_coverage=1 00:16:01.924 --rc genhtml_function_coverage=1 00:16:01.924 --rc genhtml_legend=1 00:16:01.924 --rc geninfo_all_blocks=1 00:16:01.924 --rc geninfo_unexecuted_blocks=1 00:16:01.924 00:16:01.924 ' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:01.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.924 --rc genhtml_branch_coverage=1 00:16:01.924 --rc genhtml_function_coverage=1 00:16:01.924 --rc genhtml_legend=1 00:16:01.924 --rc geninfo_all_blocks=1 00:16:01.924 --rc geninfo_unexecuted_blocks=1 00:16:01.924 00:16:01.924 ' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
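The `lt 1.15 2` / `cmp_versions` trace from scripts/common.sh above splits each version on `.`, `-` and `:` via IFS and compares the numeric fields left to right. A minimal sketch of that logic, assuming purely numeric fields and using names of my own choosing:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the component-wise version comparison traced
# above (scripts/common.sh cmp_versions): split on '.', '-' and ':' with
# IFS, pad the shorter version with zeros, and let the first differing
# field decide. Returns 0 (true) when $1 < $2.
version_lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # earlier field wins
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
```

With this, `version_lt 1.15 2` succeeds on the first field (1 < 2), matching the `return 0` seen in the trace.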
NVMF_SECOND_PORT=4421 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.924 
13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:01.924 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.925 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.925 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.925 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:01.925 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:16:01.925 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:01.925 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:04.459 13:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:04.459 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:04.459 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.459 13:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:04.459 Found net devices under 0000:09:00.0: cvl_0_0 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:04.459 Found net devices under 0000:09:00.1: cvl_0_1 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:16:04.459 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:04.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:16:04.460 00:16:04.460 --- 10.0.0.2 ping statistics --- 00:16:04.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.460 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:16:04.460 00:16:04.460 --- 10.0.0.1 ping statistics --- 00:16:04.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.460 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2205607 00:16:04.460 13:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2205607 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2205607 ']' 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.460 [2024-12-05 13:47:35.722611] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:16:04.460 [2024-12-05 13:47:35.722706] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.460 [2024-12-05 13:47:35.793537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.460 [2024-12-05 13:47:35.850893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:04.460 [2024-12-05 13:47:35.850948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.460 [2024-12-05 13:47:35.850976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.460 [2024-12-05 13:47:35.850987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.460 [2024-12-05 13:47:35.850997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.460 [2024-12-05 13:47:35.852612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.460 [2024-12-05 13:47:35.852664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.460 [2024-12-05 13:47:35.852668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.460 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.718 [2024-12-05 13:47:36.010295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 [2024-12-05 13:47:36.027344] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 NULL1 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2205630 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:04.718 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.719 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.976 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.976 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:04.976 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.976 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.976 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.235 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.235 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:05.235 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.235 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.235 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.800 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.800 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:05.800 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.801 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.801 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.058 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.059 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:06.059 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.059 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.059 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.316 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.316 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:06.316 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.316 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.316 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.574 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.574 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:06.574 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.574 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.575 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.833 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.833 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:06.833 13:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.833 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.833 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:07.399 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.399 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:07.399 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.399 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.399 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:07.656 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.656 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:07.656 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.656 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.656 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:07.913 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.913 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:07.913 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.913 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.913 
13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.172 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.172 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:08.172 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.172 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.172 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.430 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.430 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:08.430 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.430 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.430 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.997 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.997 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:08.997 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.997 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.997 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.255 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.255 
13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:09.255 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.255 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.255 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.513 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.513 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:09.513 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.513 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.513 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.771 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.771 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:09.771 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.771 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.771 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.031 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.031 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:10.031 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:16:10.031 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.031 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.597 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.597 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:10.597 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.597 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.597 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.854 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.854 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:10.854 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.854 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.854 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.112 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.113 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:11.113 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.113 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.113 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:16:11.370 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.370 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:11.371 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.371 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.371 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.935 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.936 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:11.936 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.936 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.936 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.197 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.197 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:12.197 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.197 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.197 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.457 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.457 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2205630 00:16:12.457 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.457 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.457 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.714 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.714 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:12.714 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.714 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.714 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.972 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.972 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:12.972 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.972 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.972 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.536 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.536 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:13.537 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.537 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:13.537 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.793 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.793 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:13.793 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.793 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.793 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.050 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.050 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:14.050 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.050 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.050 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.309 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.309 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:14.309 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.309 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.309 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.596 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:16:14.596 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:14.596 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.596 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.596 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.853 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:14.853 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.853 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2205630 00:16:14.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2205630) - No such process 00:16:14.853 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2205630 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:14.854 13:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:14.854 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:15.111 rmmod nvme_tcp 00:16:15.111 rmmod nvme_fabrics 00:16:15.111 rmmod nvme_keyring 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2205607 ']' 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2205607 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2205607 ']' 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2205607 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2205607 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2205607' 00:16:15.111 killing process with pid 2205607 00:16:15.111 13:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2205607 00:16:15.111 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2205607 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.370 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:17.274 00:16:17.274 real 0m15.577s 00:16:17.274 user 0m38.793s 00:16:17.274 sys 0m5.893s 00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.274 ************************************ 00:16:17.274 END TEST nvmf_connect_stress 00:16:17.274 ************************************ 00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.274 ************************************ 00:16:17.274 START TEST nvmf_fused_ordering 00:16:17.274 ************************************ 00:16:17.274 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:17.533 * Looking for test storage... 
00:16:17.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:17.533 13:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.533 13:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:17.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.533 --rc genhtml_branch_coverage=1 00:16:17.533 --rc genhtml_function_coverage=1 00:16:17.533 --rc genhtml_legend=1 00:16:17.533 --rc geninfo_all_blocks=1 00:16:17.533 --rc geninfo_unexecuted_blocks=1 00:16:17.533 00:16:17.533 ' 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:17.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.533 --rc genhtml_branch_coverage=1 00:16:17.533 --rc genhtml_function_coverage=1 00:16:17.533 --rc genhtml_legend=1 00:16:17.533 --rc geninfo_all_blocks=1 00:16:17.533 --rc geninfo_unexecuted_blocks=1 00:16:17.533 00:16:17.533 ' 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:17.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.533 --rc genhtml_branch_coverage=1 00:16:17.533 --rc genhtml_function_coverage=1 00:16:17.533 --rc genhtml_legend=1 00:16:17.533 --rc geninfo_all_blocks=1 00:16:17.533 --rc geninfo_unexecuted_blocks=1 00:16:17.533 00:16:17.533 ' 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:17.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.533 --rc genhtml_branch_coverage=1 00:16:17.533 --rc genhtml_function_coverage=1 00:16:17.533 --rc genhtml_legend=1 00:16:17.533 --rc geninfo_all_blocks=1 00:16:17.533 --rc geninfo_unexecuted_blocks=1 00:16:17.533 00:16:17.533 ' 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.533 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:17.534 13:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.066 13:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:20.066 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.066 13:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:20.066 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.066 13:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:20.066 Found net devices under 0000:09:00.0: cvl_0_0 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:20.066 Found net devices under 0000:09:00.1: cvl_0_1 
00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:20.066 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:20.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:20.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:16:20.067 00:16:20.067 --- 10.0.0.2 ping statistics --- 00:16:20.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.067 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:16:20.067 00:16:20.067 --- 10.0.0.1 ping statistics --- 00:16:20.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.067 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:20.067 13:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2208788 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2208788 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2208788 ']' 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.067 [2024-12-05 13:47:51.299066] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:16:20.067 [2024-12-05 13:47:51.299149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.067 [2024-12-05 13:47:51.371003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.067 [2024-12-05 13:47:51.426554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.067 [2024-12-05 13:47:51.426604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.067 [2024-12-05 13:47:51.426633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.067 [2024-12-05 13:47:51.426644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.067 [2024-12-05 13:47:51.426655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:20.067 [2024-12-05 13:47:51.427238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.067 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.326 [2024-12-05 13:47:51.596534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.326 [2024-12-05 13:47:51.612728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.326 NULL1 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
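The rpc_cmd calls traced by target/fused_ordering.sh (@15–@20) provision the target in a fixed order: transport, subsystem, listener, backing bdev, then namespace attach. A dry-run sketch of that sequence follows; the `scripts/rpc.py` path is an assumption about an SPDK checkout, and commands are recorded rather than executed since they need a running nvmf_tgt:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence target/fused_ordering.sh issues in the log above.
# RPC path is an assumed SPDK-checkout location; RPCS[] records what would be sent.
set -u
RPC="scripts/rpc.py"   # assumption: run from the SPDK source root
RPCS=()
rpc() { RPCS+=("$*"); echo "+ $RPC $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte in-capsule data
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                  # null bdev: 1000 MiB, 512-byte blocks
rpc bdev_wait_for_examine                            # let bdev examination settle first
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After this sequence the fused_ordering test binary connects with `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`, matching the "Attached to nqn.2016-06.io.spdk:cnode1" line that follows in the log.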
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.326 13:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:20.326 [2024-12-05 13:47:51.655234] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:16:20.326 [2024-12-05 13:47:51.655268] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208926 ] 00:16:20.585 Attached to nqn.2016-06.io.spdk:cnode1 00:16:20.585 Namespace ID: 1 size: 1GB 00:16:20.585 fused_ordering(0) 00:16:20.585 fused_ordering(1) 00:16:20.585 fused_ordering(2) 00:16:20.585 fused_ordering(3) 00:16:20.585 fused_ordering(4) 00:16:20.585 fused_ordering(5) 00:16:20.585 fused_ordering(6) 00:16:20.585 fused_ordering(7) 00:16:20.585 fused_ordering(8) 00:16:20.585 fused_ordering(9) 00:16:20.585 fused_ordering(10) 00:16:20.585 fused_ordering(11) 00:16:20.585 fused_ordering(12) 00:16:20.585 fused_ordering(13) 00:16:20.585 fused_ordering(14) 00:16:20.585 fused_ordering(15) 00:16:20.585 fused_ordering(16) 00:16:20.585 fused_ordering(17) 00:16:20.585 fused_ordering(18) 00:16:20.585 fused_ordering(19) 00:16:20.585 fused_ordering(20) 00:16:20.585 fused_ordering(21) 00:16:20.585 fused_ordering(22) 00:16:20.585 fused_ordering(23) 00:16:20.585 fused_ordering(24) 00:16:20.585 fused_ordering(25) 00:16:20.585 fused_ordering(26) 00:16:20.585 fused_ordering(27) 00:16:20.585 
fused_ordering(28) 00:16:20.585 fused_ordering(29) 00:16:20.585 fused_ordering(30) 00:16:20.585 fused_ordering(31) 00:16:20.585 fused_ordering(32) 00:16:20.585 fused_ordering(33) 00:16:20.585 fused_ordering(34) 00:16:20.585 fused_ordering(35) 00:16:20.585 fused_ordering(36) 00:16:20.585 fused_ordering(37) 00:16:20.585 fused_ordering(38) 00:16:20.585 fused_ordering(39) 00:16:20.585 fused_ordering(40) 00:16:20.585 fused_ordering(41) 00:16:20.585 fused_ordering(42) 00:16:20.585 fused_ordering(43) 00:16:20.585 fused_ordering(44) 00:16:20.585 fused_ordering(45) 00:16:20.585 fused_ordering(46) 00:16:20.585 fused_ordering(47) 00:16:20.585 fused_ordering(48) 00:16:20.585 fused_ordering(49) 00:16:20.585 fused_ordering(50) 00:16:20.585 fused_ordering(51) 00:16:20.585 fused_ordering(52) 00:16:20.585 fused_ordering(53) 00:16:20.585 fused_ordering(54) 00:16:20.585 fused_ordering(55) 00:16:20.585 fused_ordering(56) 00:16:20.585 fused_ordering(57) 00:16:20.585 fused_ordering(58) 00:16:20.585 fused_ordering(59) 00:16:20.585 fused_ordering(60) 00:16:20.585 fused_ordering(61) 00:16:20.585 fused_ordering(62) 00:16:20.585 fused_ordering(63) 00:16:20.585 fused_ordering(64) 00:16:20.585 fused_ordering(65) 00:16:20.585 fused_ordering(66) 00:16:20.585 fused_ordering(67) 00:16:20.585 fused_ordering(68) 00:16:20.585 fused_ordering(69) 00:16:20.585 fused_ordering(70) 00:16:20.585 fused_ordering(71) 00:16:20.585 fused_ordering(72) 00:16:20.585 fused_ordering(73) 00:16:20.585 fused_ordering(74) 00:16:20.585 fused_ordering(75) 00:16:20.585 fused_ordering(76) 00:16:20.585 fused_ordering(77) 00:16:20.585 fused_ordering(78) 00:16:20.585 fused_ordering(79) 00:16:20.585 fused_ordering(80) 00:16:20.585 fused_ordering(81) 00:16:20.585 fused_ordering(82) 00:16:20.585 fused_ordering(83) 00:16:20.585 fused_ordering(84) 00:16:20.585 fused_ordering(85) 00:16:20.585 fused_ordering(86) 00:16:20.585 fused_ordering(87) 00:16:20.585 fused_ordering(88) 00:16:20.585 fused_ordering(89) 00:16:20.585 
fused_ordering(90) 00:16:20.585 fused_ordering(91) 00:16:20.585 fused_ordering(92) 00:16:20.585 fused_ordering(93) 00:16:20.585 fused_ordering(94) 00:16:20.585 fused_ordering(95) 00:16:20.585 fused_ordering(96) 00:16:20.585 fused_ordering(97) 00:16:20.585 fused_ordering(98) 00:16:20.585 fused_ordering(99) 00:16:20.585 fused_ordering(100) 00:16:20.585 fused_ordering(101) 00:16:20.585 fused_ordering(102) 00:16:20.585 fused_ordering(103) 00:16:20.585 fused_ordering(104) 00:16:20.585 fused_ordering(105) 00:16:20.585 fused_ordering(106) 00:16:20.585 fused_ordering(107) 00:16:20.585 fused_ordering(108) 00:16:20.585 fused_ordering(109) 00:16:20.585 fused_ordering(110) 00:16:20.585 fused_ordering(111) 00:16:20.585 fused_ordering(112) 00:16:20.585 fused_ordering(113) 00:16:20.585 fused_ordering(114) 00:16:20.585 fused_ordering(115) 00:16:20.585 fused_ordering(116) 00:16:20.585 fused_ordering(117) 00:16:20.585 fused_ordering(118) 00:16:20.585 fused_ordering(119) 00:16:20.585 fused_ordering(120) 00:16:20.585 fused_ordering(121) 00:16:20.585 fused_ordering(122) 00:16:20.585 fused_ordering(123) 00:16:20.585 fused_ordering(124) 00:16:20.585 fused_ordering(125) 00:16:20.585 fused_ordering(126) 00:16:20.585 fused_ordering(127) 00:16:20.585 fused_ordering(128) 00:16:20.585 fused_ordering(129) 00:16:20.585 fused_ordering(130) 00:16:20.585 fused_ordering(131) 00:16:20.585 fused_ordering(132) 00:16:20.585 fused_ordering(133) 00:16:20.585 fused_ordering(134) 00:16:20.585 fused_ordering(135) 00:16:20.585 fused_ordering(136) 00:16:20.585 fused_ordering(137) 00:16:20.585 fused_ordering(138) 00:16:20.585 fused_ordering(139) 00:16:20.585 fused_ordering(140) 00:16:20.585 fused_ordering(141) 00:16:20.585 fused_ordering(142) 00:16:20.585 fused_ordering(143) 00:16:20.585 fused_ordering(144) 00:16:20.585 fused_ordering(145) 00:16:20.585 fused_ordering(146) 00:16:20.585 fused_ordering(147) 00:16:20.585 fused_ordering(148) 00:16:20.585 fused_ordering(149) 00:16:20.585 fused_ordering(150) 
00:16:20.585 fused_ordering(151) 00:16:20.585 fused_ordering(152) 00:16:20.585 fused_ordering(153) 00:16:20.585 fused_ordering(154) 00:16:20.585 fused_ordering(155) 00:16:20.585 fused_ordering(156) 00:16:20.585 fused_ordering(157) 00:16:20.585 fused_ordering(158) 00:16:20.585 fused_ordering(159) 00:16:20.585 fused_ordering(160) 00:16:20.585 fused_ordering(161) 00:16:20.585 fused_ordering(162) 00:16:20.585 fused_ordering(163) 00:16:20.585 fused_ordering(164) 00:16:20.585 fused_ordering(165) 00:16:20.585 fused_ordering(166) 00:16:20.585 fused_ordering(167) 00:16:20.585 fused_ordering(168) 00:16:20.585 fused_ordering(169) 00:16:20.585 fused_ordering(170) 00:16:20.585 fused_ordering(171) 00:16:20.585 fused_ordering(172) 00:16:20.585 fused_ordering(173) 00:16:20.585 fused_ordering(174) 00:16:20.585 fused_ordering(175) 00:16:20.585 fused_ordering(176) 00:16:20.585 fused_ordering(177) 00:16:20.585 fused_ordering(178) 00:16:20.585 fused_ordering(179) 00:16:20.585 fused_ordering(180) 00:16:20.585 fused_ordering(181) 00:16:20.585 fused_ordering(182) 00:16:20.585 fused_ordering(183) 00:16:20.585 fused_ordering(184) 00:16:20.585 fused_ordering(185) 00:16:20.585 fused_ordering(186) 00:16:20.585 fused_ordering(187) 00:16:20.585 fused_ordering(188) 00:16:20.585 fused_ordering(189) 00:16:20.585 fused_ordering(190) 00:16:20.585 fused_ordering(191) 00:16:20.585 fused_ordering(192) 00:16:20.585 fused_ordering(193) 00:16:20.585 fused_ordering(194) 00:16:20.585 fused_ordering(195) 00:16:20.585 fused_ordering(196) 00:16:20.585 fused_ordering(197) 00:16:20.585 fused_ordering(198) 00:16:20.585 fused_ordering(199) 00:16:20.585 fused_ordering(200) 00:16:20.585 fused_ordering(201) 00:16:20.585 fused_ordering(202) 00:16:20.585 fused_ordering(203) 00:16:20.585 fused_ordering(204) 00:16:20.585 fused_ordering(205) 00:16:20.843 fused_ordering(206) 00:16:20.843 fused_ordering(207) 00:16:20.843 fused_ordering(208) 00:16:20.843 fused_ordering(209) 00:16:20.843 fused_ordering(210) 00:16:20.843 
fused_ordering(211) 00:16:20.843 fused_ordering(212) 00:16:20.843 fused_ordering(213) 00:16:20.843 fused_ordering(214) 00:16:20.843 fused_ordering(215) 00:16:20.843 fused_ordering(216) 00:16:20.843 fused_ordering(217) 00:16:20.843 fused_ordering(218) 00:16:20.843 fused_ordering(219) 00:16:20.844 fused_ordering(220) 00:16:20.844 fused_ordering(221) 00:16:20.844 fused_ordering(222) 00:16:20.844 fused_ordering(223) 00:16:20.844 fused_ordering(224) 00:16:20.844 fused_ordering(225) 00:16:20.844 fused_ordering(226) 00:16:20.844 fused_ordering(227) 00:16:20.844 fused_ordering(228) 00:16:20.844 fused_ordering(229) 00:16:20.844 fused_ordering(230) 00:16:20.844 fused_ordering(231) 00:16:20.844 fused_ordering(232) 00:16:20.844 fused_ordering(233) 00:16:20.844 fused_ordering(234) 00:16:20.844 fused_ordering(235) 00:16:20.844 fused_ordering(236) 00:16:20.844 fused_ordering(237) 00:16:20.844 fused_ordering(238) 00:16:20.844 fused_ordering(239) 00:16:20.844 fused_ordering(240) 00:16:20.844 fused_ordering(241) 00:16:20.844 fused_ordering(242) 00:16:20.844 fused_ordering(243) 00:16:20.844 fused_ordering(244) 00:16:20.844 fused_ordering(245) 00:16:20.844 fused_ordering(246) 00:16:20.844 fused_ordering(247) 00:16:20.844 fused_ordering(248) 00:16:20.844 fused_ordering(249) 00:16:20.844 fused_ordering(250) 00:16:20.844 fused_ordering(251) 00:16:20.844 fused_ordering(252) 00:16:20.844 fused_ordering(253) 00:16:20.844 fused_ordering(254) 00:16:20.844 fused_ordering(255) 00:16:20.844 fused_ordering(256) 00:16:20.844 fused_ordering(257) 00:16:20.844 fused_ordering(258) 00:16:20.844 fused_ordering(259) 00:16:20.844 fused_ordering(260) 00:16:20.844 fused_ordering(261) 00:16:20.844 fused_ordering(262) 00:16:20.844 fused_ordering(263) 00:16:20.844 fused_ordering(264) 00:16:20.844 fused_ordering(265) 00:16:20.844 fused_ordering(266) 00:16:20.844 fused_ordering(267) 00:16:20.844 fused_ordering(268) 00:16:20.844 fused_ordering(269) 00:16:20.844 fused_ordering(270) 00:16:20.844 fused_ordering(271) 
00:16:20.844 fused_ordering(272) 00:16:20.844 fused_ordering(273) 00:16:20.844 fused_ordering(274) 00:16:20.844 fused_ordering(275) 00:16:20.844 fused_ordering(276) 00:16:20.844 fused_ordering(277) 00:16:20.844 fused_ordering(278) 00:16:20.844 fused_ordering(279) 00:16:20.844 fused_ordering(280) 00:16:20.844 fused_ordering(281) 00:16:20.844 fused_ordering(282) 00:16:20.844 fused_ordering(283) 00:16:20.844 fused_ordering(284) 00:16:20.844 fused_ordering(285) 00:16:20.844 fused_ordering(286) 00:16:20.844 fused_ordering(287) 00:16:20.844 fused_ordering(288) 00:16:20.844 fused_ordering(289) 00:16:20.844 fused_ordering(290) 00:16:20.844 fused_ordering(291) 00:16:20.844 fused_ordering(292) 00:16:20.844 fused_ordering(293) 00:16:20.844 fused_ordering(294) 00:16:20.844 fused_ordering(295) 00:16:20.844 fused_ordering(296) 00:16:20.844 fused_ordering(297) 00:16:20.844 fused_ordering(298) 00:16:20.844 fused_ordering(299) 00:16:20.844 fused_ordering(300) 00:16:20.844 fused_ordering(301) 00:16:20.844 fused_ordering(302) 00:16:20.844 fused_ordering(303) 00:16:20.844 fused_ordering(304) 00:16:20.844 fused_ordering(305) 00:16:20.844 fused_ordering(306) 00:16:20.844 fused_ordering(307) 00:16:20.844 fused_ordering(308) 00:16:20.844 fused_ordering(309) 00:16:20.844 fused_ordering(310) 00:16:20.844 fused_ordering(311) 00:16:20.844 fused_ordering(312) 00:16:20.844 fused_ordering(313) 00:16:20.844 fused_ordering(314) 00:16:20.844 fused_ordering(315) 00:16:20.844 fused_ordering(316) 00:16:20.844 fused_ordering(317) 00:16:20.844 fused_ordering(318) 00:16:20.844 fused_ordering(319) 00:16:20.844 fused_ordering(320) 00:16:20.844 fused_ordering(321) 00:16:20.844 fused_ordering(322) 00:16:20.844 fused_ordering(323) 00:16:20.844 fused_ordering(324) 00:16:20.844 fused_ordering(325) 00:16:20.844 fused_ordering(326) 00:16:20.844 fused_ordering(327) 00:16:20.844 fused_ordering(328) 00:16:20.844 fused_ordering(329) 00:16:20.844 fused_ordering(330) 00:16:20.844 fused_ordering(331) 00:16:20.844 
fused_ordering(332) 00:16:20.844 fused_ordering(333) 00:16:20.844 fused_ordering(334) 00:16:20.844 fused_ordering(335) 00:16:20.844 fused_ordering(336) 00:16:20.844 fused_ordering(337) 00:16:20.844 fused_ordering(338) 00:16:20.844 fused_ordering(339) 00:16:20.844 fused_ordering(340) 00:16:20.844 fused_ordering(341) 00:16:20.844 fused_ordering(342) 00:16:20.844 fused_ordering(343) 00:16:20.844 fused_ordering(344) 00:16:20.844 fused_ordering(345) 00:16:20.844 fused_ordering(346) 00:16:20.844 fused_ordering(347) 00:16:20.844 fused_ordering(348) 00:16:20.844 fused_ordering(349) 00:16:20.844 fused_ordering(350) 00:16:20.844 fused_ordering(351) 00:16:20.844 fused_ordering(352) 00:16:20.844 fused_ordering(353) 00:16:20.844 fused_ordering(354) 00:16:20.844 fused_ordering(355) 00:16:20.844 fused_ordering(356) 00:16:20.844 fused_ordering(357) 00:16:20.844 fused_ordering(358) 00:16:20.844 fused_ordering(359) 00:16:20.844 fused_ordering(360) 00:16:20.844 fused_ordering(361) 00:16:20.844 fused_ordering(362) 00:16:20.844 fused_ordering(363) 00:16:20.844 fused_ordering(364) 00:16:20.844 fused_ordering(365) 00:16:20.844 fused_ordering(366) 00:16:20.844 fused_ordering(367) 00:16:20.844 fused_ordering(368) 00:16:20.844 fused_ordering(369) 00:16:20.844 fused_ordering(370) 00:16:20.844 fused_ordering(371) 00:16:20.844 fused_ordering(372) 00:16:20.844 fused_ordering(373) 00:16:20.844 fused_ordering(374) 00:16:20.844 fused_ordering(375) 00:16:20.844 fused_ordering(376) 00:16:20.844 fused_ordering(377) 00:16:20.844 fused_ordering(378) 00:16:20.844 fused_ordering(379) 00:16:20.844 fused_ordering(380) 00:16:20.844 fused_ordering(381) 00:16:20.844 fused_ordering(382) 00:16:20.844 fused_ordering(383) 00:16:20.844 fused_ordering(384) 00:16:20.844 fused_ordering(385) 00:16:20.844 fused_ordering(386) 00:16:20.844 fused_ordering(387) 00:16:20.844 fused_ordering(388) 00:16:20.844 fused_ordering(389) 00:16:20.844 fused_ordering(390) 00:16:20.844 fused_ordering(391) 00:16:20.844 fused_ordering(392) 
00:16:20.844 fused_ordering(393) 00:16:20.844 fused_ordering(394) 00:16:20.844 fused_ordering(395) 00:16:20.844 fused_ordering(396) 00:16:20.844 fused_ordering(397) 00:16:20.844 fused_ordering(398) 00:16:20.844 fused_ordering(399) 00:16:20.844 fused_ordering(400) 00:16:20.844 fused_ordering(401) 00:16:20.844 fused_ordering(402) 00:16:20.844 fused_ordering(403) 00:16:20.844 fused_ordering(404) 00:16:20.844 fused_ordering(405) 00:16:20.844 fused_ordering(406) 00:16:20.844 fused_ordering(407) 00:16:20.844 fused_ordering(408) 00:16:20.844 fused_ordering(409) 00:16:20.844 fused_ordering(410) 00:16:21.410 fused_ordering(411) 00:16:21.410 fused_ordering(412) 00:16:21.410 fused_ordering(413) 00:16:21.410 fused_ordering(414) 00:16:21.410 fused_ordering(415) 00:16:21.410 fused_ordering(416) 00:16:21.410 fused_ordering(417) 00:16:21.410 fused_ordering(418) 00:16:21.410 fused_ordering(419) 00:16:21.410 fused_ordering(420) 00:16:21.410 fused_ordering(421) 00:16:21.410 fused_ordering(422) 00:16:21.410 fused_ordering(423) 00:16:21.410 fused_ordering(424) 00:16:21.410 fused_ordering(425) 00:16:21.410 fused_ordering(426) 00:16:21.410 fused_ordering(427) 00:16:21.410 fused_ordering(428) 00:16:21.410 fused_ordering(429) 00:16:21.410 fused_ordering(430) 00:16:21.410 fused_ordering(431) 00:16:21.410 fused_ordering(432) 00:16:21.410 fused_ordering(433) 00:16:21.410 fused_ordering(434) 00:16:21.410 fused_ordering(435) 00:16:21.410 fused_ordering(436) 00:16:21.410 fused_ordering(437) 00:16:21.410 fused_ordering(438) 00:16:21.410 fused_ordering(439) 00:16:21.410 fused_ordering(440) 00:16:21.410 fused_ordering(441) 00:16:21.410 fused_ordering(442) 00:16:21.410 fused_ordering(443) 00:16:21.410 fused_ordering(444) 00:16:21.410 fused_ordering(445) 00:16:21.410 fused_ordering(446) 00:16:21.410 fused_ordering(447) 00:16:21.410 fused_ordering(448) 00:16:21.410 fused_ordering(449) 00:16:21.410 fused_ordering(450) 00:16:21.410 fused_ordering(451) 00:16:21.410 fused_ordering(452) 00:16:21.410 
fused_ordering(453) 00:16:21.410 fused_ordering(454) 00:16:21.410 fused_ordering(455) 00:16:21.410 fused_ordering(456) 00:16:21.410 fused_ordering(457) 00:16:21.410 fused_ordering(458) 00:16:21.410 fused_ordering(459) 00:16:21.410 fused_ordering(460) 00:16:21.410 fused_ordering(461) 00:16:21.410 fused_ordering(462) 00:16:21.410 fused_ordering(463) 00:16:21.410 fused_ordering(464) 00:16:21.410 fused_ordering(465) 00:16:21.410 fused_ordering(466) 00:16:21.410 fused_ordering(467) 00:16:21.410 fused_ordering(468) 00:16:21.410 fused_ordering(469) 00:16:21.410 fused_ordering(470) 00:16:21.410 fused_ordering(471) 00:16:21.410 fused_ordering(472) 00:16:21.410 fused_ordering(473) 00:16:21.410 fused_ordering(474) 00:16:21.410 fused_ordering(475) 00:16:21.410 fused_ordering(476) 00:16:21.410 fused_ordering(477) 00:16:21.410 fused_ordering(478) 00:16:21.410 fused_ordering(479) 00:16:21.410 fused_ordering(480) 00:16:21.410 fused_ordering(481) 00:16:21.410 fused_ordering(482) 00:16:21.410 fused_ordering(483) 00:16:21.410 fused_ordering(484) 00:16:21.410 fused_ordering(485) 00:16:21.410 fused_ordering(486) 00:16:21.410 fused_ordering(487) 00:16:21.410 fused_ordering(488) 00:16:21.410 fused_ordering(489) 00:16:21.410 fused_ordering(490) 00:16:21.410 fused_ordering(491) 00:16:21.410 fused_ordering(492) 00:16:21.410 fused_ordering(493) 00:16:21.410 fused_ordering(494) 00:16:21.410 fused_ordering(495) 00:16:21.410 fused_ordering(496) 00:16:21.410 fused_ordering(497) 00:16:21.410 fused_ordering(498) 00:16:21.410 fused_ordering(499) 00:16:21.410 fused_ordering(500) 00:16:21.410 fused_ordering(501) 00:16:21.410 fused_ordering(502) 00:16:21.410 fused_ordering(503) 00:16:21.410 fused_ordering(504) 00:16:21.410 fused_ordering(505) 00:16:21.410 fused_ordering(506) 00:16:21.410 fused_ordering(507) 00:16:21.410 fused_ordering(508) 00:16:21.410 fused_ordering(509) 00:16:21.410 fused_ordering(510) 00:16:21.410 fused_ordering(511) 00:16:21.410 fused_ordering(512) 00:16:21.410 fused_ordering(513) 
00:16:21.410 fused_ordering(514) 00:16:21.410 fused_ordering(515) 00:16:21.410 fused_ordering(516) 00:16:21.410 fused_ordering(517) 00:16:21.410 fused_ordering(518) 00:16:21.410 fused_ordering(519) 00:16:21.410 fused_ordering(520) 00:16:21.410 fused_ordering(521) 00:16:21.410 fused_ordering(522) 00:16:21.410 fused_ordering(523) 00:16:21.410 fused_ordering(524) 00:16:21.410 fused_ordering(525) 00:16:21.410 fused_ordering(526) 00:16:21.410 fused_ordering(527) 00:16:21.410 fused_ordering(528) 00:16:21.410 fused_ordering(529) 00:16:21.410 fused_ordering(530) 00:16:21.410 fused_ordering(531) 00:16:21.410 fused_ordering(532) 00:16:21.410 fused_ordering(533) 00:16:21.410 fused_ordering(534) 00:16:21.410 fused_ordering(535) 00:16:21.411 fused_ordering(536) 00:16:21.411 fused_ordering(537) 00:16:21.411 fused_ordering(538) 00:16:21.411 fused_ordering(539) 00:16:21.411 fused_ordering(540) 00:16:21.411 fused_ordering(541) 00:16:21.411 fused_ordering(542) 00:16:21.411 fused_ordering(543) 00:16:21.411 fused_ordering(544) 00:16:21.411 fused_ordering(545) 00:16:21.411 fused_ordering(546) 00:16:21.411 fused_ordering(547) 00:16:21.411 fused_ordering(548) 00:16:21.411 fused_ordering(549) 00:16:21.411 fused_ordering(550) 00:16:21.411 fused_ordering(551) 00:16:21.411 fused_ordering(552) 00:16:21.411 fused_ordering(553) 00:16:21.411 fused_ordering(554) 00:16:21.411 fused_ordering(555) 00:16:21.411 fused_ordering(556) 00:16:21.411 fused_ordering(557) 00:16:21.411 fused_ordering(558) 00:16:21.411 fused_ordering(559) 00:16:21.411 fused_ordering(560) 00:16:21.411 fused_ordering(561) 00:16:21.411 fused_ordering(562) 00:16:21.411 fused_ordering(563) 00:16:21.411 fused_ordering(564) 00:16:21.411 fused_ordering(565) 00:16:21.411 fused_ordering(566) 00:16:21.411 fused_ordering(567) 00:16:21.411 fused_ordering(568) 00:16:21.411 fused_ordering(569) 00:16:21.411 fused_ordering(570) 00:16:21.411 fused_ordering(571) 00:16:21.411 fused_ordering(572) 00:16:21.411 fused_ordering(573) 00:16:21.411 
fused_ordering(574) 00:16:21.411 fused_ordering(575) 00:16:21.411 fused_ordering(576) 00:16:21.411 fused_ordering(577) 00:16:21.411 fused_ordering(578) 00:16:21.411 fused_ordering(579) 00:16:21.411 fused_ordering(580) 00:16:21.411 fused_ordering(581) 00:16:21.411 fused_ordering(582) 00:16:21.411 fused_ordering(583) 00:16:21.411 fused_ordering(584) 00:16:21.411 fused_ordering(585) 00:16:21.411 fused_ordering(586) 00:16:21.411 fused_ordering(587) 00:16:21.411 fused_ordering(588) 00:16:21.411 fused_ordering(589) 00:16:21.411 fused_ordering(590) 00:16:21.411 fused_ordering(591) 00:16:21.411 fused_ordering(592) 00:16:21.411 fused_ordering(593) 00:16:21.411 fused_ordering(594) 00:16:21.411 fused_ordering(595) 00:16:21.411 fused_ordering(596) 00:16:21.411 fused_ordering(597) 00:16:21.411 fused_ordering(598) 00:16:21.411 fused_ordering(599) 00:16:21.411 fused_ordering(600) 00:16:21.411 fused_ordering(601) 00:16:21.411 fused_ordering(602) 00:16:21.411 fused_ordering(603) 00:16:21.411 fused_ordering(604) 00:16:21.411 fused_ordering(605) 00:16:21.411 fused_ordering(606) 00:16:21.411 fused_ordering(607) 00:16:21.411 fused_ordering(608) 00:16:21.411 fused_ordering(609) 00:16:21.411 fused_ordering(610) 00:16:21.411 fused_ordering(611) 00:16:21.411 fused_ordering(612) 00:16:21.411 fused_ordering(613) 00:16:21.411 fused_ordering(614) 00:16:21.411 fused_ordering(615) 00:16:21.977 fused_ordering(616) 00:16:21.977 fused_ordering(617) 00:16:21.977 fused_ordering(618) 00:16:21.977 fused_ordering(619) 00:16:21.977 fused_ordering(620) 00:16:21.977 fused_ordering(621) 00:16:21.977 fused_ordering(622) 00:16:21.977 fused_ordering(623) 00:16:21.977 fused_ordering(624) 00:16:21.977 fused_ordering(625) 00:16:21.977 fused_ordering(626) 00:16:21.977 fused_ordering(627) 00:16:21.977 fused_ordering(628) 00:16:21.977 fused_ordering(629) 00:16:21.977 fused_ordering(630) 00:16:21.977 fused_ordering(631) 00:16:21.977 fused_ordering(632) 00:16:21.977 fused_ordering(633) 00:16:21.977 fused_ordering(634) 
00:16:21.977 fused_ordering(635) 00:16:21.977 fused_ordering(636) 00:16:21.977 fused_ordering(637) 00:16:21.977 fused_ordering(638) 00:16:21.977 fused_ordering(639) 00:16:21.977 fused_ordering(640) 00:16:21.977 fused_ordering(641) 00:16:21.977 fused_ordering(642) 00:16:21.977 fused_ordering(643) 00:16:21.977 fused_ordering(644) 00:16:21.977 fused_ordering(645) 00:16:21.977 fused_ordering(646) 00:16:21.977 fused_ordering(647) 00:16:21.977 fused_ordering(648) 00:16:21.977 fused_ordering(649) 00:16:21.977 fused_ordering(650) 00:16:21.977 fused_ordering(651) 00:16:21.977 fused_ordering(652) 00:16:21.977 fused_ordering(653) 00:16:21.977 fused_ordering(654) 00:16:21.977 fused_ordering(655) 00:16:21.977 fused_ordering(656) 00:16:21.977 fused_ordering(657) 00:16:21.977 fused_ordering(658) 00:16:21.977 fused_ordering(659) 00:16:21.977 fused_ordering(660) 00:16:21.977 fused_ordering(661) 00:16:21.977 fused_ordering(662) 00:16:21.977 fused_ordering(663) 00:16:21.977 fused_ordering(664) 00:16:21.977 fused_ordering(665) 00:16:21.977 fused_ordering(666) 00:16:21.977 fused_ordering(667) 00:16:21.977 fused_ordering(668) 00:16:21.977 fused_ordering(669) 00:16:21.977 fused_ordering(670) 00:16:21.977 fused_ordering(671) 00:16:21.977 fused_ordering(672) 00:16:21.977 fused_ordering(673) 00:16:21.977 fused_ordering(674) 00:16:21.977 fused_ordering(675) 00:16:21.977 fused_ordering(676) 00:16:21.977 fused_ordering(677) 00:16:21.977 fused_ordering(678) 00:16:21.977 fused_ordering(679) 00:16:21.977 fused_ordering(680) 00:16:21.977 fused_ordering(681) 00:16:21.977 fused_ordering(682) 00:16:21.977 fused_ordering(683) 00:16:21.977 fused_ordering(684) 00:16:21.977 fused_ordering(685) 00:16:21.977 fused_ordering(686) 00:16:21.977 fused_ordering(687) 00:16:21.977 fused_ordering(688) 00:16:21.977 fused_ordering(689) 00:16:21.977 fused_ordering(690) 00:16:21.977 fused_ordering(691) 00:16:21.977 fused_ordering(692) 00:16:21.977 fused_ordering(693) 00:16:21.977 fused_ordering(694) 00:16:21.977 
00:16:21.977 fused_ordering(695) [... sequential fused_ordering counter output through fused_ordering(1022) elided ...] 00:16:22.549 fused_ordering(1023) 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.549 rmmod nvme_tcp 00:16:22.549 rmmod nvme_fabrics 00:16:22.549 rmmod nvme_keyring 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2208788 ']' 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2208788 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2208788 ']' 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2208788 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.549 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2208788 00:16:22.549 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:22.549 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:22.549 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2208788' 00:16:22.549 killing process with pid 2208788 00:16:22.549 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2208788 00:16:22.549 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2208788 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.810 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:25.348 00:16:25.348 real 0m7.510s 00:16:25.348 user 0m5.037s 00:16:25.348 sys 0m3.152s 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.348 ************************************ 00:16:25.348 END TEST nvmf_fused_ordering 00:16:25.348 ************************************ 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:25.348 13:47:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.348 ************************************ 00:16:25.348 START TEST nvmf_ns_masking 00:16:25.348 ************************************ 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:25.348 * Looking for test storage... 00:16:25.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.348 13:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.348 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.349 --rc genhtml_branch_coverage=1 00:16:25.349 --rc genhtml_function_coverage=1 00:16:25.349 --rc genhtml_legend=1 00:16:25.349 --rc geninfo_all_blocks=1 00:16:25.349 --rc geninfo_unexecuted_blocks=1 00:16:25.349 00:16:25.349 ' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.349 --rc genhtml_branch_coverage=1 00:16:25.349 --rc genhtml_function_coverage=1 00:16:25.349 --rc genhtml_legend=1 00:16:25.349 --rc geninfo_all_blocks=1 00:16:25.349 --rc geninfo_unexecuted_blocks=1 00:16:25.349 00:16:25.349 ' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.349 --rc genhtml_branch_coverage=1 00:16:25.349 --rc genhtml_function_coverage=1 00:16:25.349 --rc genhtml_legend=1 00:16:25.349 --rc geninfo_all_blocks=1 00:16:25.349 --rc geninfo_unexecuted_blocks=1 00:16:25.349 00:16:25.349 ' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.349 --rc genhtml_branch_coverage=1 00:16:25.349 --rc 
genhtml_function_coverage=1 00:16:25.349 --rc genhtml_legend=1 00:16:25.349 --rc geninfo_all_blocks=1 00:16:25.349 --rc geninfo_unexecuted_blocks=1 00:16:25.349 00:16:25.349 ' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:25.349 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=72bac88a-6f61-44b1-a40b-1e1726caa8b9 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=01007520-677c-43f3-83d3-a2ecb035e00f 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8b96f636-b63a-4f05-95bc-088aff67c53b 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.350 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:27.256 13:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.256 13:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:27.256 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:27.256 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:16:27.256 Found net devices under 0000:09:00.0: cvl_0_0 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:27.256 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:27.257 Found net devices under 0000:09:00.1: cvl_0_1 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.257 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:27.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:16:27.516 00:16:27.516 --- 10.0.0.2 ping statistics --- 00:16:27.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.516 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:16:27.516 00:16:27.516 --- 10.0.0.1 ping statistics --- 00:16:27.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.516 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2211142 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2211142 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2211142 ']' 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.516 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:27.516 [2024-12-05 13:47:58.954797] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:16:27.516 [2024-12-05 13:47:58.954876] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.516 [2024-12-05 13:47:59.024737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.775 [2024-12-05 13:47:59.079553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.775 [2024-12-05 13:47:59.079602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:27.775 [2024-12-05 13:47:59.079632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.775 [2024-12-05 13:47:59.079645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.775 [2024-12-05 13:47:59.079654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.775 [2024-12-05 13:47:59.080260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.775 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.775 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:27.775 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:27.775 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:27.775 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:27.775 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.775 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:28.033 [2024-12-05 13:47:59.470907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.033 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:28.033 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:28.033 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:16:28.309 Malloc1 00:16:28.309 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:28.566 Malloc2 00:16:28.566 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:29.130 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:29.130 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.388 [2024-12-05 13:48:00.892972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.646 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:29.646 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8b96f636-b63a-4f05-95bc-088aff67c53b -a 10.0.0.2 -s 4420 -i 4 00:16:29.646 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.646 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:29.646 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.646 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:29.646 13:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:31.541 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.798 [ 0]:0x1 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.798 
13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e04db082595c4e3aa79ec4e6e61e2d78 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e04db082595c4e3aa79ec4e6e61e2d78 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.798 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:32.058 [ 0]:0x1 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e04db082595c4e3aa79ec4e6e61e2d78 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e04db082595c4e3aa79ec4e6e61e2d78 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:32.058 [ 1]:0x2 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7bc568c2775e4c599458fbab3560bd41 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7bc568c2775e4c599458fbab3560bd41 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:32.058 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.316 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.576 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:32.837 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:32.837 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8b96f636-b63a-4f05-95bc-088aff67c53b -a 10.0.0.2 -s 4420 -i 4 00:16:33.098 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:33.098 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:33.098 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.098 13:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:33.098 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:33.098 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:35.019 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:35.280 [ 0]:0x2 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7bc568c2775e4c599458fbab3560bd41 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7bc568c2775e4c599458fbab3560bd41 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:35.280 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:35.539 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:35.539 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:35.539 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:35.539 [ 0]:0x1 00:16:35.539 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:35.539 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:35.539 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e04db082595c4e3aa79ec4e6e61e2d78 00:16:35.539 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e04db082595c4e3aa79ec4e6e61e2d78 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:35.539 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:35.539 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:35.539 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:35.539 [ 1]:0x2 00:16:35.539 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:35.539 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:35.798 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7bc568c2775e4c599458fbab3560bd41 00:16:35.798 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7bc568c2775e4c599458fbab3560bd41 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:35.798 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:36.056 [ 0]:0x2 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7bc568c2775e4c599458fbab3560bd41 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7bc568c2775e4c599458fbab3560bd41 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.056 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:36.315 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:36.315 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8b96f636-b63a-4f05-95bc-088aff67c53b -a 10.0.0.2 -s 4420 -i 4 00:16:36.574 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:36.574 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:36.574 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.574 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:36.574 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:36.574 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:38.480 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:38.738 [ 0]:0x1 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:38.738 13:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e04db082595c4e3aa79ec4e6e61e2d78 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e04db082595c4e3aa79ec4e6e61e2d78 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:38.738 [ 1]:0x2 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7bc568c2775e4c599458fbab3560bd41 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7bc568c2775e4c599458fbab3560bd41 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.738 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:39.302 
13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:39.302 [ 0]:0x2 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7bc568c2775e4c599458fbab3560bd41 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7bc568c2775e4c599458fbab3560bd41 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.302 13:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:39.302 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:39.561 [2024-12-05 13:48:10.890806] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:39.561 request: 00:16:39.561 { 00:16:39.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.561 "nsid": 2, 00:16:39.561 "host": "nqn.2016-06.io.spdk:host1", 00:16:39.561 "method": "nvmf_ns_remove_host", 00:16:39.561 "req_id": 1 00:16:39.561 } 00:16:39.561 Got JSON-RPC error response 00:16:39.561 response: 00:16:39.561 { 00:16:39.561 "code": -32602, 00:16:39.561 "message": "Invalid parameters" 00:16:39.561 } 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:39.561 13:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:39.561 [ 0]:0x2 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:39.561 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7bc568c2775e4c599458fbab3560bd41 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7bc568c2775e4c599458fbab3560bd41 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2213380 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2213380 /var/tmp/host.sock 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2213380 ']' 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:39.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.561 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:39.819 [2024-12-05 13:48:11.111563] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:16:39.819 [2024-12-05 13:48:11.111648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213380 ] 00:16:39.819 [2024-12-05 13:48:11.177172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.819 [2024-12-05 13:48:11.234957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.078 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.078 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:40.078 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.335 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:40.901 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 72bac88a-6f61-44b1-a40b-1e1726caa8b9 00:16:40.901 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:40.901 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 72BAC88A6F6144B1A40B1E1726CAA8B9 -i 00:16:41.159 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 01007520-677c-43f3-83d3-a2ecb035e00f 00:16:41.159 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:41.159 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 01007520677C43F383D3A2ECB035E00F -i 00:16:41.417 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.675 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:41.932 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:41.932 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:42.190 nvme0n1 00:16:42.190 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:42.190 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:42.756 nvme1n2 00:16:42.756 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:42.756 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:42.756 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:42.756 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:42.756 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:43.014 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:43.014 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:43.014 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:43.014 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:43.272 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 72bac88a-6f61-44b1-a40b-1e1726caa8b9 == \7\2\b\a\c\8\8\a\-\6\f\6\1\-\4\4\b\1\-\a\4\0\b\-\1\e\1\7\2\6\c\a\a\8\b\9 ]] 00:16:43.272 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:43.272 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:43.272 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:43.530 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 01007520-677c-43f3-83d3-a2ecb035e00f == \0\1\0\0\7\5\2\0\-\6\7\7\c\-\4\3\f\3\-\8\3\d\3\-\a\2\e\c\b\0\3\5\e\0\0\f ]] 00:16:43.530 13:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.787 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 72bac88a-6f61-44b1-a40b-1e1726caa8b9 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 72BAC88A6F6144B1A40B1E1726CAA8B9 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 72BAC88A6F6144B1A40B1E1726CAA8B9 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:44.044 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 72BAC88A6F6144B1A40B1E1726CAA8B9 00:16:44.301 [2024-12-05 13:48:15.764732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:44.301 [2024-12-05 13:48:15.764793] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:44.301 [2024-12-05 13:48:15.764811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.301 request: 00:16:44.301 { 00:16:44.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.301 "namespace": { 00:16:44.301 "bdev_name": "invalid", 00:16:44.301 "nsid": 1, 00:16:44.301 "nguid": "72BAC88A6F6144B1A40B1E1726CAA8B9", 00:16:44.301 "no_auto_visible": false, 00:16:44.301 "hide_metadata": false 00:16:44.301 }, 00:16:44.301 "method": "nvmf_subsystem_add_ns", 00:16:44.301 "req_id": 1 00:16:44.301 } 00:16:44.301 Got JSON-RPC error response 00:16:44.301 response: 00:16:44.301 { 00:16:44.301 "code": -32602, 00:16:44.301 "message": "Invalid parameters" 00:16:44.301 } 00:16:44.301 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:44.301 13:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.301 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.301 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.301 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 72bac88a-6f61-44b1-a40b-1e1726caa8b9 00:16:44.301 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:44.301 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 72BAC88A6F6144B1A40B1E1726CAA8B9 -i 00:16:44.558 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2213380 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2213380 ']' 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2213380 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:47.129 13:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213380 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2213380' 00:16:47.129 killing process with pid 2213380 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2213380 00:16:47.129 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2213380 00:16:47.387 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.644 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:47.644 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:47.644 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.644 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:47.644 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.644 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:16:47.645 rmmod nvme_tcp 00:16:47.645 rmmod nvme_fabrics 00:16:47.645 rmmod nvme_keyring 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2211142 ']' 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2211142 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2211142 ']' 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2211142 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211142 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211142' 00:16:47.645 killing process with pid 2211142 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2211142 00:16:47.645 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2211142 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.903 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:50.442 00:16:50.442 real 0m25.103s 00:16:50.442 user 0m36.299s 00:16:50.442 sys 0m4.738s 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.442 ************************************ 00:16:50.442 END TEST nvmf_ns_masking 00:16:50.442 ************************************ 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.442 ************************************ 00:16:50.442 START TEST nvmf_nvme_cli 00:16:50.442 ************************************ 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:50.442 * Looking for test storage... 00:16:50.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.442 --rc genhtml_branch_coverage=1 00:16:50.442 --rc genhtml_function_coverage=1 00:16:50.442 --rc genhtml_legend=1 00:16:50.442 --rc geninfo_all_blocks=1 00:16:50.442 --rc geninfo_unexecuted_blocks=1 00:16:50.442 
00:16:50.442 ' 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.442 --rc genhtml_branch_coverage=1 00:16:50.442 --rc genhtml_function_coverage=1 00:16:50.442 --rc genhtml_legend=1 00:16:50.442 --rc geninfo_all_blocks=1 00:16:50.442 --rc geninfo_unexecuted_blocks=1 00:16:50.442 00:16:50.442 ' 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.442 --rc genhtml_branch_coverage=1 00:16:50.442 --rc genhtml_function_coverage=1 00:16:50.442 --rc genhtml_legend=1 00:16:50.442 --rc geninfo_all_blocks=1 00:16:50.442 --rc geninfo_unexecuted_blocks=1 00:16:50.442 00:16:50.442 ' 00:16:50.442 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.442 --rc genhtml_branch_coverage=1 00:16:50.443 --rc genhtml_function_coverage=1 00:16:50.443 --rc genhtml_legend=1 00:16:50.443 --rc geninfo_all_blocks=1 00:16:50.443 --rc geninfo_unexecuted_blocks=1 00:16:50.443 00:16:50.443 ' 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.443 13:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.443 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:52.979 13:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:52.979 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:52.979 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.979 13:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:52.979 Found net devices under 0000:09:00.0: cvl_0_0 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:52.979 Found net devices under 0000:09:00.1: cvl_0_1 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.979 13:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:52.979 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.980 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.980 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:52.980 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:52.980 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.980 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:52.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:16:52.980 00:16:52.980 --- 10.0.0.2 ping statistics --- 00:16:52.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.980 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:16:52.980 00:16:52.980 --- 10.0.0.1 ping statistics --- 00:16:52.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.980 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:52.980 13:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2216295 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2216295 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2216295 ']' 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 [2024-12-05 13:48:24.187924] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:16:52.980 [2024-12-05 13:48:24.188023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.980 [2024-12-05 13:48:24.259274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.980 [2024-12-05 13:48:24.314225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.980 [2024-12-05 13:48:24.314275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.980 [2024-12-05 13:48:24.314303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.980 [2024-12-05 13:48:24.314314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.980 [2024-12-05 13:48:24.314323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:52.980 [2024-12-05 13:48:24.316029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.980 [2024-12-05 13:48:24.316104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.980 [2024-12-05 13:48:24.316207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.980 [2024-12-05 13:48:24.316198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 [2024-12-05 13:48:24.463957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:52.980 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.239 Malloc0 00:16:53.239 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.239 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.240 Malloc1 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.240 [2024-12-05 13:48:24.565383] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:16:53.240 00:16:53.240 Discovery Log Number of Records 2, Generation counter 2 00:16:53.240 =====Discovery Log Entry 0====== 00:16:53.240 trtype: tcp 00:16:53.240 adrfam: ipv4 00:16:53.240 subtype: current discovery subsystem 00:16:53.240 treq: not required 00:16:53.240 portid: 0 00:16:53.240 trsvcid: 4420 
00:16:53.240 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:53.240 traddr: 10.0.0.2 00:16:53.240 eflags: explicit discovery connections, duplicate discovery information 00:16:53.240 sectype: none 00:16:53.240 =====Discovery Log Entry 1====== 00:16:53.240 trtype: tcp 00:16:53.240 adrfam: ipv4 00:16:53.240 subtype: nvme subsystem 00:16:53.240 treq: not required 00:16:53.240 portid: 0 00:16:53.240 trsvcid: 4420 00:16:53.240 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:53.240 traddr: 10.0.0.2 00:16:53.240 eflags: none 00:16:53.240 sectype: none 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:53.240 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:53.498 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.062 13:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:54.062 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:54.062 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.062 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:54.062 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:54.062 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:55.962 
13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:55.962 /dev/nvme0n2 ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:55.962 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.221 rmmod nvme_tcp 00:16:56.221 rmmod nvme_fabrics 00:16:56.221 rmmod nvme_keyring 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2216295 ']' 
00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2216295 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2216295 ']' 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2216295 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216295 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216295' 00:16:56.221 killing process with pid 2216295 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2216295 00:16:56.221 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2216295 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.480 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.016 00:16:59.016 real 0m8.467s 00:16:59.016 user 0m15.052s 00:16:59.016 sys 0m2.440s 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.016 ************************************ 00:16:59.016 END TEST nvmf_nvme_cli 00:16:59.016 ************************************ 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.016 13:48:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.016 ************************************ 00:16:59.016 
START TEST nvmf_vfio_user 00:16:59.016 ************************************ 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:59.016 * Looking for test storage... 00:16:59.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.016 13:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:59.016 13:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:59.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.016 --rc genhtml_branch_coverage=1 00:16:59.016 --rc genhtml_function_coverage=1 00:16:59.016 --rc genhtml_legend=1 00:16:59.016 --rc geninfo_all_blocks=1 00:16:59.016 --rc geninfo_unexecuted_blocks=1 00:16:59.016 00:16:59.016 ' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:59.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.016 --rc genhtml_branch_coverage=1 00:16:59.016 --rc genhtml_function_coverage=1 00:16:59.016 --rc genhtml_legend=1 00:16:59.016 --rc geninfo_all_blocks=1 00:16:59.016 --rc geninfo_unexecuted_blocks=1 00:16:59.016 00:16:59.016 ' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:59.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.016 --rc genhtml_branch_coverage=1 00:16:59.016 --rc genhtml_function_coverage=1 00:16:59.016 --rc genhtml_legend=1 00:16:59.016 --rc geninfo_all_blocks=1 00:16:59.016 --rc geninfo_unexecuted_blocks=1 00:16:59.016 00:16:59.016 ' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:59.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.016 --rc genhtml_branch_coverage=1 00:16:59.016 --rc genhtml_function_coverage=1 00:16:59.016 --rc genhtml_legend=1 00:16:59.016 --rc geninfo_all_blocks=1 00:16:59.016 --rc geninfo_unexecuted_blocks=1 00:16:59.016 00:16:59.016 ' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.016 
13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.016 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:59.017 13:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2217216 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2217216' 00:16:59.017 Process pid: 2217216 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2217216 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2217216 ']' 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:59.017 [2024-12-05 13:48:30.242286] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:16:59.017 [2024-12-05 13:48:30.242379] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.017 [2024-12-05 13:48:30.314006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.017 [2024-12-05 13:48:30.375014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.017 [2024-12-05 13:48:30.375065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.017 [2024-12-05 13:48:30.375094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.017 [2024-12-05 13:48:30.375105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.017 [2024-12-05 13:48:30.375115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:59.017 [2024-12-05 13:48:30.376830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.017 [2024-12-05 13:48:30.376864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.017 [2024-12-05 13:48:30.376921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.017 [2024-12-05 13:48:30.376924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:59.017 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:00.392 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:00.392 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:00.392 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:00.392 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:00.392 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:00.392 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:00.993 Malloc1 00:17:00.994 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:00.994 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:01.274 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:01.532 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:01.532 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:01.532 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:01.790 Malloc2 00:17:01.790 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:02.047 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:02.304 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:02.870 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:02.870 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:02.870 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:02.870 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:02.870 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:02.870 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:02.870 [2024-12-05 13:48:34.115384] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:17:02.870 [2024-12-05 13:48:34.115450] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217652 ] 00:17:02.870 [2024-12-05 13:48:34.166569] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:02.870 [2024-12-05 13:48:34.171888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:02.870 [2024-12-05 13:48:34.171921] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6bdc8d5000 00:17:02.870 [2024-12-05 13:48:34.172884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.173882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.174884] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.175886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.176893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.177901] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.178907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.179910] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:02.870 [2024-12-05 13:48:34.180916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:02.870 [2024-12-05 13:48:34.180936] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6bdc8ca000 00:17:02.870 [2024-12-05 13:48:34.182055] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:02.870 [2024-12-05 13:48:34.197081] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:02.870 [2024-12-05 13:48:34.197126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:02.870 [2024-12-05 13:48:34.202049] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:17:02.870 [2024-12-05 13:48:34.202106] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:02.870 [2024-12-05 13:48:34.202198] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:02.870 [2024-12-05 13:48:34.202225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:02.870 [2024-12-05 13:48:34.202236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:02.870 [2024-12-05 13:48:34.203030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:02.870 [2024-12-05 13:48:34.203054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:02.870 [2024-12-05 13:48:34.203068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:02.870 [2024-12-05 13:48:34.204037] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:02.870 [2024-12-05 13:48:34.204055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:02.870 [2024-12-05 13:48:34.204068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:02.870 [2024-12-05 13:48:34.205044] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:02.870 [2024-12-05 13:48:34.205062] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:02.870 [2024-12-05 13:48:34.206049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:02.870 [2024-12-05 13:48:34.206067] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:02.870 [2024-12-05 13:48:34.206076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:02.870 [2024-12-05 13:48:34.206087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:02.870 [2024-12-05 13:48:34.206197] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:02.871 [2024-12-05 13:48:34.206204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:02.871 [2024-12-05 13:48:34.206212] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:02.871 [2024-12-05 13:48:34.207426] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:02.871 [2024-12-05 13:48:34.208056] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:02.871 [2024-12-05 13:48:34.209069] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:17:02.871 [2024-12-05 13:48:34.210063] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:02.871 [2024-12-05 13:48:34.210161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:02.871 [2024-12-05 13:48:34.211080] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:02.871 [2024-12-05 13:48:34.211098] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:02.871 [2024-12-05 13:48:34.211106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211130] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:02.871 [2024-12-05 13:48:34.211143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211173] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:02.871 [2024-12-05 13:48:34.211183] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:02.871 [2024-12-05 13:48:34.211189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:02.871 [2024-12-05 13:48:34.211207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.211268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.211284] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:02.871 [2024-12-05 13:48:34.211292] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:02.871 [2024-12-05 13:48:34.211299] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:02.871 [2024-12-05 13:48:34.211306] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:02.871 [2024-12-05 13:48:34.211313] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:02.871 [2024-12-05 13:48:34.211320] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:02.871 [2024-12-05 13:48:34.211328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.211372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.211387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.871 [2024-12-05 
13:48:34.211414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.871 [2024-12-05 13:48:34.211435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.871 [2024-12-05 13:48:34.211448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.871 [2024-12-05 13:48:34.211456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.211506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.211517] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:02.871 [2024-12-05 13:48:34.211525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.211577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.211643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211672] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:02.871 [2024-12-05 13:48:34.211680] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:02.871 [2024-12-05 13:48:34.211686] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:02.871 [2024-12-05 13:48:34.211710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.211724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.211745] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:02.871 [2024-12-05 13:48:34.211765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211791] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:02.871 [2024-12-05 13:48:34.211798] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:02.871 [2024-12-05 13:48:34.211804] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:02.871 [2024-12-05 13:48:34.211813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.211838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.211853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211883] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:02.871 [2024-12-05 13:48:34.211890] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:02.871 [2024-12-05 13:48:34.211896] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:02.871 [2024-12-05 13:48:34.211905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.211919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.211937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.211996] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:02.871 [2024-12-05 13:48:34.212003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:02.871 [2024-12-05 13:48:34.212011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:02.871 [2024-12-05 13:48:34.212035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.212054] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.212073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.212084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.212100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:02.871 [2024-12-05 13:48:34.212114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:02.871 [2024-12-05 13:48:34.212130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:02.872 [2024-12-05 13:48:34.212141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:02.872 [2024-12-05 13:48:34.212162] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:02.872 [2024-12-05 13:48:34.212172] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:02.872 [2024-12-05 13:48:34.212178] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:02.872 [2024-12-05 13:48:34.212187] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:02.872 [2024-12-05 13:48:34.212193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:02.872 [2024-12-05 13:48:34.212202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:17:02.872 [2024-12-05 13:48:34.212213] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:02.872 [2024-12-05 13:48:34.212221] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:02.872 [2024-12-05 13:48:34.212227] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:02.872 [2024-12-05 13:48:34.212235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:02.872 [2024-12-05 13:48:34.212246] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:02.872 [2024-12-05 13:48:34.212254] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:02.872 [2024-12-05 13:48:34.212259] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:02.872 [2024-12-05 13:48:34.212268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:02.872 [2024-12-05 13:48:34.212279] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:02.872 [2024-12-05 13:48:34.212287] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:02.872 [2024-12-05 13:48:34.212293] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:02.872 [2024-12-05 13:48:34.212301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:02.872 [2024-12-05 13:48:34.212312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:17:02.872 [2024-12-05 13:48:34.212334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:02.872 [2024-12-05 13:48:34.212352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:02.872 [2024-12-05 13:48:34.212364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:02.872 ===================================================== 00:17:02.872 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:02.872 ===================================================== 00:17:02.872 Controller Capabilities/Features 00:17:02.872 ================================ 00:17:02.872 Vendor ID: 4e58 00:17:02.872 Subsystem Vendor ID: 4e58 00:17:02.872 Serial Number: SPDK1 00:17:02.872 Model Number: SPDK bdev Controller 00:17:02.872 Firmware Version: 25.01 00:17:02.872 Recommended Arb Burst: 6 00:17:02.872 IEEE OUI Identifier: 8d 6b 50 00:17:02.872 Multi-path I/O 00:17:02.872 May have multiple subsystem ports: Yes 00:17:02.872 May have multiple controllers: Yes 00:17:02.872 Associated with SR-IOV VF: No 00:17:02.872 Max Data Transfer Size: 131072 00:17:02.872 Max Number of Namespaces: 32 00:17:02.872 Max Number of I/O Queues: 127 00:17:02.872 NVMe Specification Version (VS): 1.3 00:17:02.872 NVMe Specification Version (Identify): 1.3 00:17:02.872 Maximum Queue Entries: 256 00:17:02.872 Contiguous Queues Required: Yes 00:17:02.872 Arbitration Mechanisms Supported 00:17:02.872 Weighted Round Robin: Not Supported 00:17:02.872 Vendor Specific: Not Supported 00:17:02.872 Reset Timeout: 15000 ms 00:17:02.872 Doorbell Stride: 4 bytes 00:17:02.872 NVM Subsystem Reset: Not Supported 00:17:02.872 Command Sets Supported 00:17:02.872 NVM Command Set: Supported 00:17:02.872 Boot Partition: Not Supported 00:17:02.872 Memory 
Page Size Minimum: 4096 bytes 00:17:02.872 Memory Page Size Maximum: 4096 bytes 00:17:02.872 Persistent Memory Region: Not Supported 00:17:02.872 Optional Asynchronous Events Supported 00:17:02.872 Namespace Attribute Notices: Supported 00:17:02.872 Firmware Activation Notices: Not Supported 00:17:02.872 ANA Change Notices: Not Supported 00:17:02.872 PLE Aggregate Log Change Notices: Not Supported 00:17:02.872 LBA Status Info Alert Notices: Not Supported 00:17:02.872 EGE Aggregate Log Change Notices: Not Supported 00:17:02.872 Normal NVM Subsystem Shutdown event: Not Supported 00:17:02.872 Zone Descriptor Change Notices: Not Supported 00:17:02.872 Discovery Log Change Notices: Not Supported 00:17:02.872 Controller Attributes 00:17:02.872 128-bit Host Identifier: Supported 00:17:02.872 Non-Operational Permissive Mode: Not Supported 00:17:02.872 NVM Sets: Not Supported 00:17:02.872 Read Recovery Levels: Not Supported 00:17:02.872 Endurance Groups: Not Supported 00:17:02.872 Predictable Latency Mode: Not Supported 00:17:02.872 Traffic Based Keep ALive: Not Supported 00:17:02.872 Namespace Granularity: Not Supported 00:17:02.872 SQ Associations: Not Supported 00:17:02.872 UUID List: Not Supported 00:17:02.872 Multi-Domain Subsystem: Not Supported 00:17:02.872 Fixed Capacity Management: Not Supported 00:17:02.872 Variable Capacity Management: Not Supported 00:17:02.872 Delete Endurance Group: Not Supported 00:17:02.872 Delete NVM Set: Not Supported 00:17:02.872 Extended LBA Formats Supported: Not Supported 00:17:02.872 Flexible Data Placement Supported: Not Supported 00:17:02.872 00:17:02.872 Controller Memory Buffer Support 00:17:02.872 ================================ 00:17:02.872 Supported: No 00:17:02.872 00:17:02.872 Persistent Memory Region Support 00:17:02.872 ================================ 00:17:02.872 Supported: No 00:17:02.872 00:17:02.872 Admin Command Set Attributes 00:17:02.872 ============================ 00:17:02.872 Security Send/Receive: Not Supported 
00:17:02.872 Format NVM: Not Supported 00:17:02.872 Firmware Activate/Download: Not Supported 00:17:02.872 Namespace Management: Not Supported 00:17:02.872 Device Self-Test: Not Supported 00:17:02.872 Directives: Not Supported 00:17:02.872 NVMe-MI: Not Supported 00:17:02.872 Virtualization Management: Not Supported 00:17:02.872 Doorbell Buffer Config: Not Supported 00:17:02.872 Get LBA Status Capability: Not Supported 00:17:02.872 Command & Feature Lockdown Capability: Not Supported 00:17:02.872 Abort Command Limit: 4 00:17:02.872 Async Event Request Limit: 4 00:17:02.872 Number of Firmware Slots: N/A 00:17:02.872 Firmware Slot 1 Read-Only: N/A 00:17:02.872 Firmware Activation Without Reset: N/A 00:17:02.872 Multiple Update Detection Support: N/A 00:17:02.872 Firmware Update Granularity: No Information Provided 00:17:02.872 Per-Namespace SMART Log: No 00:17:02.872 Asymmetric Namespace Access Log Page: Not Supported 00:17:02.872 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:02.872 Command Effects Log Page: Supported 00:17:02.872 Get Log Page Extended Data: Supported 00:17:02.872 Telemetry Log Pages: Not Supported 00:17:02.872 Persistent Event Log Pages: Not Supported 00:17:02.872 Supported Log Pages Log Page: May Support 00:17:02.872 Commands Supported & Effects Log Page: Not Supported 00:17:02.872 Feature Identifiers & Effects Log Page:May Support 00:17:02.872 NVMe-MI Commands & Effects Log Page: May Support 00:17:02.872 Data Area 4 for Telemetry Log: Not Supported 00:17:02.872 Error Log Page Entries Supported: 128 00:17:02.872 Keep Alive: Supported 00:17:02.872 Keep Alive Granularity: 10000 ms 00:17:02.872 00:17:02.872 NVM Command Set Attributes 00:17:02.872 ========================== 00:17:02.872 Submission Queue Entry Size 00:17:02.872 Max: 64 00:17:02.872 Min: 64 00:17:02.872 Completion Queue Entry Size 00:17:02.872 Max: 16 00:17:02.872 Min: 16 00:17:02.872 Number of Namespaces: 32 00:17:02.872 Compare Command: Supported 00:17:02.872 Write Uncorrectable 
Command: Not Supported 00:17:02.872 Dataset Management Command: Supported 00:17:02.872 Write Zeroes Command: Supported 00:17:02.872 Set Features Save Field: Not Supported 00:17:02.872 Reservations: Not Supported 00:17:02.872 Timestamp: Not Supported 00:17:02.872 Copy: Supported 00:17:02.872 Volatile Write Cache: Present 00:17:02.872 Atomic Write Unit (Normal): 1 00:17:02.872 Atomic Write Unit (PFail): 1 00:17:02.872 Atomic Compare & Write Unit: 1 00:17:02.872 Fused Compare & Write: Supported 00:17:02.872 Scatter-Gather List 00:17:02.872 SGL Command Set: Supported (Dword aligned) 00:17:02.872 SGL Keyed: Not Supported 00:17:02.872 SGL Bit Bucket Descriptor: Not Supported 00:17:02.872 SGL Metadata Pointer: Not Supported 00:17:02.872 Oversized SGL: Not Supported 00:17:02.872 SGL Metadata Address: Not Supported 00:17:02.872 SGL Offset: Not Supported 00:17:02.872 Transport SGL Data Block: Not Supported 00:17:02.872 Replay Protected Memory Block: Not Supported 00:17:02.872 00:17:02.872 Firmware Slot Information 00:17:02.872 ========================= 00:17:02.872 Active slot: 1 00:17:02.872 Slot 1 Firmware Revision: 25.01 00:17:02.872 00:17:02.872 00:17:02.873 Commands Supported and Effects 00:17:02.873 ============================== 00:17:02.873 Admin Commands 00:17:02.873 -------------- 00:17:02.873 Get Log Page (02h): Supported 00:17:02.873 Identify (06h): Supported 00:17:02.873 Abort (08h): Supported 00:17:02.873 Set Features (09h): Supported 00:17:02.873 Get Features (0Ah): Supported 00:17:02.873 Asynchronous Event Request (0Ch): Supported 00:17:02.873 Keep Alive (18h): Supported 00:17:02.873 I/O Commands 00:17:02.873 ------------ 00:17:02.873 Flush (00h): Supported LBA-Change 00:17:02.873 Write (01h): Supported LBA-Change 00:17:02.873 Read (02h): Supported 00:17:02.873 Compare (05h): Supported 00:17:02.873 Write Zeroes (08h): Supported LBA-Change 00:17:02.873 Dataset Management (09h): Supported LBA-Change 00:17:02.873 Copy (19h): Supported LBA-Change 00:17:02.873 
00:17:02.873 Error Log 00:17:02.873 ========= 00:17:02.873 00:17:02.873 Arbitration 00:17:02.873 =========== 00:17:02.873 Arbitration Burst: 1 00:17:02.873 00:17:02.873 Power Management 00:17:02.873 ================ 00:17:02.873 Number of Power States: 1 00:17:02.873 Current Power State: Power State #0 00:17:02.873 Power State #0: 00:17:02.873 Max Power: 0.00 W 00:17:02.873 Non-Operational State: Operational 00:17:02.873 Entry Latency: Not Reported 00:17:02.873 Exit Latency: Not Reported 00:17:02.873 Relative Read Throughput: 0 00:17:02.873 Relative Read Latency: 0 00:17:02.873 Relative Write Throughput: 0 00:17:02.873 Relative Write Latency: 0 00:17:02.873 Idle Power: Not Reported 00:17:02.873 Active Power: Not Reported 00:17:02.873 Non-Operational Permissive Mode: Not Supported 00:17:02.873 00:17:02.873 Health Information 00:17:02.873 ================== 00:17:02.873 Critical Warnings: 00:17:02.873 Available Spare Space: OK 00:17:02.873 Temperature: OK 00:17:02.873 Device Reliability: OK 00:17:02.873 Read Only: No 00:17:02.873 Volatile Memory Backup: OK 00:17:02.873 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:02.873 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:02.873 Available Spare: 0% 00:17:02.873 Available Sp[2024-12-05 13:48:34.212509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:02.873 [2024-12-05 13:48:34.212527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:02.873 [2024-12-05 13:48:34.212572] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:02.873 [2024-12-05 13:48:34.212590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.873 [2024-12-05 13:48:34.212602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.873 [2024-12-05 13:48:34.212612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.873 [2024-12-05 13:48:34.212622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.873 [2024-12-05 13:48:34.216428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:02.873 [2024-12-05 13:48:34.216450] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:02.873 [2024-12-05 13:48:34.217107] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:02.873 [2024-12-05 13:48:34.217185] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:02.873 [2024-12-05 13:48:34.217198] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:02.873 [2024-12-05 13:48:34.218117] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:02.873 [2024-12-05 13:48:34.218140] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:02.873 [2024-12-05 13:48:34.218193] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:02.873 [2024-12-05 13:48:34.220156] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:02.873 are Threshold: 0% 00:17:02.873 Life Percentage Used: 0% 
00:17:02.873 Data Units Read: 0 00:17:02.873 Data Units Written: 0 00:17:02.873 Host Read Commands: 0 00:17:02.873 Host Write Commands: 0 00:17:02.873 Controller Busy Time: 0 minutes 00:17:02.873 Power Cycles: 0 00:17:02.873 Power On Hours: 0 hours 00:17:02.873 Unsafe Shutdowns: 0 00:17:02.873 Unrecoverable Media Errors: 0 00:17:02.873 Lifetime Error Log Entries: 0 00:17:02.873 Warning Temperature Time: 0 minutes 00:17:02.873 Critical Temperature Time: 0 minutes 00:17:02.873 00:17:02.873 Number of Queues 00:17:02.873 ================ 00:17:02.873 Number of I/O Submission Queues: 127 00:17:02.873 Number of I/O Completion Queues: 127 00:17:02.873 00:17:02.873 Active Namespaces 00:17:02.873 ================= 00:17:02.873 Namespace ID:1 00:17:02.873 Error Recovery Timeout: Unlimited 00:17:02.873 Command Set Identifier: NVM (00h) 00:17:02.873 Deallocate: Supported 00:17:02.873 Deallocated/Unwritten Error: Not Supported 00:17:02.873 Deallocated Read Value: Unknown 00:17:02.873 Deallocate in Write Zeroes: Not Supported 00:17:02.873 Deallocated Guard Field: 0xFFFF 00:17:02.873 Flush: Supported 00:17:02.873 Reservation: Supported 00:17:02.873 Namespace Sharing Capabilities: Multiple Controllers 00:17:02.873 Size (in LBAs): 131072 (0GiB) 00:17:02.873 Capacity (in LBAs): 131072 (0GiB) 00:17:02.873 Utilization (in LBAs): 131072 (0GiB) 00:17:02.873 NGUID: 2320CD43EE6D4090B717343266390023 00:17:02.873 UUID: 2320cd43-ee6d-4090-b717-343266390023 00:17:02.873 Thin Provisioning: Not Supported 00:17:02.873 Per-NS Atomic Units: Yes 00:17:02.873 Atomic Boundary Size (Normal): 0 00:17:02.873 Atomic Boundary Size (PFail): 0 00:17:02.873 Atomic Boundary Offset: 0 00:17:02.873 Maximum Single Source Range Length: 65535 00:17:02.873 Maximum Copy Length: 65535 00:17:02.873 Maximum Source Range Count: 1 00:17:02.873 NGUID/EUI64 Never Reused: No 00:17:02.873 Namespace Write Protected: No 00:17:02.873 Number of LBA Formats: 1 00:17:02.873 Current LBA Format: LBA Format #00 00:17:02.873 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:17:02.873 00:17:02.873 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:03.131 [2024-12-05 13:48:34.474360] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:08.414 Initializing NVMe Controllers 00:17:08.414 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:08.414 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:08.414 Initialization complete. Launching workers. 00:17:08.414 ======================================================== 00:17:08.414 Latency(us) 00:17:08.414 Device Information : IOPS MiB/s Average min max 00:17:08.414 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31195.38 121.86 4104.86 1214.87 11318.59 00:17:08.414 ======================================================== 00:17:08.414 Total : 31195.38 121.86 4104.86 1214.87 11318.59 00:17:08.414 00:17:08.414 [2024-12-05 13:48:39.495774] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:08.414 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:08.414 [2024-12-05 13:48:39.752997] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:13.679 Initializing NVMe Controllers 00:17:13.679 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:13.679 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:13.679 Initialization complete. Launching workers. 00:17:13.679 ======================================================== 00:17:13.679 Latency(us) 00:17:13.679 Device Information : IOPS MiB/s Average min max 00:17:13.679 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.19 4968.37 11991.32 00:17:13.679 ======================================================== 00:17:13.679 Total : 16051.20 62.70 7984.19 4968.37 11991.32 00:17:13.679 00:17:13.679 [2024-12-05 13:48:44.792939] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:13.680 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:13.680 [2024-12-05 13:48:45.012965] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:18.943 [2024-12-05 13:48:50.069696] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:18.943 Initializing NVMe Controllers 00:17:18.943 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:18.943 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:18.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:18.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:18.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:18.943 Initialization complete. 
Launching workers. 00:17:18.943 Starting thread on core 2 00:17:18.943 Starting thread on core 3 00:17:18.943 Starting thread on core 1 00:17:18.943 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:18.943 [2024-12-05 13:48:50.400959] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:22.230 [2024-12-05 13:48:53.466371] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:22.230 Initializing NVMe Controllers 00:17:22.230 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:22.230 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:22.230 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:22.230 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:22.230 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:22.230 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:22.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:22.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:22.230 Initialization complete. Launching workers. 
00:17:22.230 Starting thread on core 1 with urgent priority queue 00:17:22.230 Starting thread on core 2 with urgent priority queue 00:17:22.230 Starting thread on core 3 with urgent priority queue 00:17:22.230 Starting thread on core 0 with urgent priority queue 00:17:22.230 SPDK bdev Controller (SPDK1 ) core 0: 5727.33 IO/s 17.46 secs/100000 ios 00:17:22.230 SPDK bdev Controller (SPDK1 ) core 1: 4630.67 IO/s 21.60 secs/100000 ios 00:17:22.230 SPDK bdev Controller (SPDK1 ) core 2: 6061.00 IO/s 16.50 secs/100000 ios 00:17:22.230 SPDK bdev Controller (SPDK1 ) core 3: 6037.67 IO/s 16.56 secs/100000 ios 00:17:22.230 ======================================================== 00:17:22.230 00:17:22.230 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:22.488 [2024-12-05 13:48:53.770195] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:22.488 Initializing NVMe Controllers 00:17:22.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:22.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:22.488 Namespace ID: 1 size: 0GB 00:17:22.488 Initialization complete. 00:17:22.488 INFO: using host memory buffer for IO 00:17:22.488 Hello world! 
00:17:22.488 [2024-12-05 13:48:53.803819] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:22.488 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:22.745 [2024-12-05 13:48:54.113271] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:23.679 Initializing NVMe Controllers 00:17:23.679 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.679 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.679 Initialization complete. Launching workers. 00:17:23.679 submit (in ns) avg, min, max = 6758.0, 3541.1, 4015194.4 00:17:23.679 complete (in ns) avg, min, max = 29354.3, 2068.9, 4028218.9 00:17:23.679 00:17:23.679 Submit histogram 00:17:23.679 ================ 00:17:23.679 Range in us Cumulative Count 00:17:23.679 3.532 - 3.556: 0.0868% ( 11) 00:17:23.679 3.556 - 3.579: 2.3982% ( 293) 00:17:23.679 3.579 - 3.603: 9.5535% ( 907) 00:17:23.679 3.603 - 3.627: 20.7242% ( 1416) 00:17:23.679 3.627 - 3.650: 31.9265% ( 1420) 00:17:23.679 3.650 - 3.674: 40.9987% ( 1150) 00:17:23.679 3.674 - 3.698: 47.9015% ( 875) 00:17:23.679 3.698 - 3.721: 54.6545% ( 856) 00:17:23.679 3.721 - 3.745: 60.1215% ( 693) 00:17:23.679 3.745 - 3.769: 64.5156% ( 557) 00:17:23.679 3.769 - 3.793: 68.6810% ( 528) 00:17:23.679 3.793 - 3.816: 71.6788% ( 380) 00:17:23.679 3.816 - 3.840: 74.7949% ( 395) 00:17:23.679 3.840 - 3.864: 78.8182% ( 510) 00:17:23.679 3.864 - 3.887: 82.7233% ( 495) 00:17:23.679 3.887 - 3.911: 85.6974% ( 377) 00:17:23.679 3.911 - 3.935: 87.9299% ( 283) 00:17:23.679 3.935 - 3.959: 89.7523% ( 231) 00:17:23.679 3.959 - 3.982: 91.5352% ( 226) 00:17:23.679 3.982 - 4.006: 93.0420% ( 191) 00:17:23.679 4.006 - 4.030: 94.2016% ( 
147) 00:17:23.679 4.030 - 4.053: 95.2114% ( 128) 00:17:23.679 4.053 - 4.077: 95.8504% ( 81) 00:17:23.679 4.077 - 4.101: 96.3395% ( 62) 00:17:23.679 4.101 - 4.124: 96.5762% ( 30) 00:17:23.679 4.124 - 4.148: 96.7419% ( 21) 00:17:23.679 4.148 - 4.172: 96.8523% ( 14) 00:17:23.679 4.172 - 4.196: 96.9864% ( 17) 00:17:23.679 4.196 - 4.219: 97.0653% ( 10) 00:17:23.679 4.219 - 4.243: 97.1363% ( 9) 00:17:23.679 4.243 - 4.267: 97.2310% ( 12) 00:17:23.679 4.267 - 4.290: 97.3493% ( 15) 00:17:23.679 4.290 - 4.314: 97.4361% ( 11) 00:17:23.679 4.314 - 4.338: 97.4913% ( 7) 00:17:23.679 4.338 - 4.361: 97.5229% ( 4) 00:17:23.679 4.361 - 4.385: 97.5387% ( 2) 00:17:23.679 4.385 - 4.409: 97.5544% ( 2) 00:17:23.679 4.409 - 4.433: 97.5623% ( 1) 00:17:23.679 4.433 - 4.456: 97.5702% ( 1) 00:17:23.679 4.456 - 4.480: 97.5860% ( 2) 00:17:23.679 4.504 - 4.527: 97.6018% ( 2) 00:17:23.679 4.527 - 4.551: 97.6097% ( 1) 00:17:23.679 4.551 - 4.575: 97.6175% ( 1) 00:17:23.679 4.575 - 4.599: 97.6333% ( 2) 00:17:23.679 4.599 - 4.622: 97.6491% ( 2) 00:17:23.679 4.622 - 4.646: 97.6964% ( 6) 00:17:23.679 4.646 - 4.670: 97.7201% ( 3) 00:17:23.679 4.670 - 4.693: 97.7911% ( 9) 00:17:23.679 4.693 - 4.717: 97.8542% ( 8) 00:17:23.679 4.717 - 4.741: 97.8858% ( 4) 00:17:23.679 4.741 - 4.764: 97.9331% ( 6) 00:17:23.679 4.764 - 4.788: 98.0120% ( 10) 00:17:23.679 4.788 - 4.812: 98.1067% ( 12) 00:17:23.679 4.812 - 4.836: 98.1382% ( 4) 00:17:23.679 4.836 - 4.859: 98.1855% ( 6) 00:17:23.679 4.859 - 4.883: 98.2408% ( 7) 00:17:23.679 4.883 - 4.907: 98.2565% ( 2) 00:17:23.679 4.907 - 4.930: 98.2723% ( 2) 00:17:23.679 4.930 - 4.954: 98.2960% ( 3) 00:17:23.679 4.954 - 4.978: 98.3118% ( 2) 00:17:23.679 4.978 - 5.001: 98.3197% ( 1) 00:17:23.679 5.001 - 5.025: 98.3354% ( 2) 00:17:23.679 5.025 - 5.049: 98.3433% ( 1) 00:17:23.679 5.049 - 5.073: 98.3591% ( 2) 00:17:23.679 5.073 - 5.096: 98.4064% ( 6) 00:17:23.679 5.096 - 5.120: 98.4222% ( 2) 00:17:23.679 5.120 - 5.144: 98.4301% ( 1) 00:17:23.679 5.144 - 5.167: 98.4380% ( 1) 
00:17:23.679 5.239 - 5.262: 98.4459% ( 1) 00:17:23.679 5.286 - 5.310: 98.4538% ( 1) 00:17:23.679 5.404 - 5.428: 98.4774% ( 3) 00:17:23.679 5.713 - 5.736: 98.4853% ( 1) 00:17:23.679 5.807 - 5.831: 98.5011% ( 2) 00:17:23.679 5.926 - 5.950: 98.5090% ( 1) 00:17:23.679 5.950 - 5.973: 98.5169% ( 1) 00:17:23.679 5.973 - 5.997: 98.5248% ( 1) 00:17:23.679 5.997 - 6.021: 98.5327% ( 1) 00:17:23.679 6.305 - 6.353: 98.5405% ( 1) 00:17:23.679 6.447 - 6.495: 98.5484% ( 1) 00:17:23.679 7.064 - 7.111: 98.5563% ( 1) 00:17:23.679 7.206 - 7.253: 98.5642% ( 1) 00:17:23.679 7.301 - 7.348: 98.5800% ( 2) 00:17:23.679 7.396 - 7.443: 98.5879% ( 1) 00:17:23.679 7.538 - 7.585: 98.6037% ( 2) 00:17:23.679 7.585 - 7.633: 98.6115% ( 1) 00:17:23.679 7.680 - 7.727: 98.6194% ( 1) 00:17:23.679 7.727 - 7.775: 98.6352% ( 2) 00:17:23.679 7.870 - 7.917: 98.6431% ( 1) 00:17:23.679 7.964 - 8.012: 98.6510% ( 1) 00:17:23.679 8.059 - 8.107: 98.6589% ( 1) 00:17:23.679 8.107 - 8.154: 98.6668% ( 1) 00:17:23.679 8.391 - 8.439: 98.6747% ( 1) 00:17:23.679 8.439 - 8.486: 98.6825% ( 1) 00:17:23.679 8.581 - 8.628: 98.6904% ( 1) 00:17:23.679 8.628 - 8.676: 98.6983% ( 1) 00:17:23.679 8.676 - 8.723: 98.7062% ( 1) 00:17:23.679 8.723 - 8.770: 98.7299% ( 3) 00:17:23.679 8.770 - 8.818: 98.7378% ( 1) 00:17:23.679 8.960 - 9.007: 98.7457% ( 1) 00:17:23.679 9.055 - 9.102: 98.7536% ( 1) 00:17:23.679 9.150 - 9.197: 98.7614% ( 1) 00:17:23.679 9.197 - 9.244: 98.7693% ( 1) 00:17:23.679 9.292 - 9.339: 98.7772% ( 1) 00:17:23.679 9.387 - 9.434: 98.7930% ( 2) 00:17:23.679 9.434 - 9.481: 98.8009% ( 1) 00:17:23.679 9.529 - 9.576: 98.8088% ( 1) 00:17:23.679 9.671 - 9.719: 98.8246% ( 2) 00:17:23.679 9.813 - 9.861: 98.8403% ( 2) 00:17:23.679 10.050 - 10.098: 98.8482% ( 1) 00:17:23.679 10.098 - 10.145: 98.8561% ( 1) 00:17:23.679 10.145 - 10.193: 98.8640% ( 1) 00:17:23.679 10.524 - 10.572: 98.8877% ( 3) 00:17:23.679 10.572 - 10.619: 98.8956% ( 1) 00:17:23.679 11.520 - 11.567: 98.9034% ( 1) 00:17:23.679 12.421 - 12.516: 98.9113% ( 1) 
00:17:23.679 12.990 - 13.084: 98.9192% ( 1) 00:17:23.679 13.369 - 13.464: 98.9271% ( 1) 00:17:23.679 14.033 - 14.127: 98.9350% ( 1) 00:17:23.679 14.696 - 14.791: 98.9508% ( 2) 00:17:23.679 14.886 - 14.981: 98.9587% ( 1) 00:17:23.679 15.265 - 15.360: 98.9666% ( 1) 00:17:23.679 15.455 - 15.550: 98.9744% ( 1) 00:17:23.679 17.067 - 17.161: 98.9823% ( 1) 00:17:23.679 17.256 - 17.351: 98.9902% ( 1) 00:17:23.679 17.351 - 17.446: 99.0139% ( 3) 00:17:23.679 17.446 - 17.541: 99.0454% ( 4) 00:17:23.679 17.541 - 17.636: 99.0770% ( 4) 00:17:23.679 17.636 - 17.730: 99.1164% ( 5) 00:17:23.679 17.730 - 17.825: 99.1638% ( 6) 00:17:23.679 17.825 - 17.920: 99.2584% ( 12) 00:17:23.679 17.920 - 18.015: 99.3531% ( 12) 00:17:23.679 18.015 - 18.110: 99.4162% ( 8) 00:17:23.679 18.110 - 18.204: 99.4636% ( 6) 00:17:23.679 18.204 - 18.299: 99.4951% ( 4) 00:17:23.679 18.299 - 18.394: 99.5898% ( 12) 00:17:23.679 18.394 - 18.489: 99.6608% ( 9) 00:17:23.679 18.489 - 18.584: 99.7081% ( 6) 00:17:23.679 18.584 - 18.679: 99.7633% ( 7) 00:17:23.679 18.679 - 18.773: 99.7949% ( 4) 00:17:23.679 18.773 - 18.868: 99.8107% ( 2) 00:17:23.679 18.868 - 18.963: 99.8343% ( 3) 00:17:23.679 19.058 - 19.153: 99.8580% ( 3) 00:17:23.679 19.153 - 19.247: 99.8659% ( 1) 00:17:23.679 19.437 - 19.532: 99.8738% ( 1) 00:17:23.679 19.532 - 19.627: 99.8817% ( 1) 00:17:23.679 22.850 - 22.945: 99.8896% ( 1) 00:17:23.679 23.799 - 23.893: 99.8974% ( 1) 00:17:23.679 23.988 - 24.083: 99.9053% ( 1) 00:17:23.679 24.841 - 25.031: 99.9132% ( 1) 00:17:23.679 26.927 - 27.117: 99.9211% ( 1) 00:17:23.679 27.307 - 27.496: 99.9290% ( 1) 00:17:23.679 3980.705 - 4004.978: 99.9921% ( 8) 00:17:23.679 4004.978 - 4029.250: 100.0000% ( 1) 00:17:23.679 00:17:23.679 Complete histogram 00:17:23.679 ================== 00:17:23.679 Range in us Cumulative Count 00:17:23.679 2.062 - 2.074: 0.8993% ( 114) 00:17:23.679 2.074 - 2.086: 32.1079% ( 3956) 00:17:23.679 2.086 - 2.098: 44.6829% ( 1594) 00:17:23.679 2.098 - 2.110: 48.4380% ( 476) 00:17:23.679 2.110 
- 2.121: 59.7270% ( 1431) 00:17:23.679 2.121 - 2.133: 62.0227% ( 291) 00:17:23.679 2.133 - 2.145: 65.8646% ( 487) 00:17:23.679 2.145 - 2.157: 79.0549% ( 1672) 00:17:23.679 2.157 - 2.169: 82.1553% ( 393) 00:17:23.679 2.169 - 2.181: 84.3168% ( 274) 00:17:23.679 2.181 - 2.193: 87.4724% ( 400) 00:17:23.679 2.193 - 2.204: 88.2061% ( 93) 00:17:23.679 2.204 - 2.216: 89.2080% ( 127) 00:17:23.679 2.216 - 2.228: 91.2038% ( 253) 00:17:23.679 2.228 - 2.240: 92.9946% ( 227) 00:17:23.679 2.240 - 2.252: 94.2016% ( 153) 00:17:23.679 2.252 - 2.264: 94.8643% ( 84) 00:17:23.679 2.264 - 2.276: 95.0458% ( 23) 00:17:23.679 2.276 - 2.287: 95.2824% ( 30) 00:17:23.679 2.287 - 2.299: 95.5033% ( 28) 00:17:23.679 2.299 - 2.311: 95.8189% ( 40) 00:17:23.679 2.311 - 2.323: 95.9688% ( 19) 00:17:23.679 2.323 - 2.335: 96.0634% ( 12) 00:17:23.679 2.335 - 2.347: 96.0871% ( 3) 00:17:23.679 2.347 - 2.359: 96.1344% ( 6) 00:17:23.679 2.359 - 2.370: 96.1581% ( 3) 00:17:23.679 2.370 - 2.382: 96.2528% ( 12) 00:17:23.679 2.382 - 2.394: 96.4105% ( 20) 00:17:23.679 2.394 - 2.406: 96.5604% ( 19) 00:17:23.679 2.406 - 2.418: 96.7103% ( 19) 00:17:23.679 2.418 - 2.430: 96.9075% ( 25) 00:17:23.679 2.430 - 2.441: 97.1284% ( 28) 00:17:23.679 2.441 - 2.453: 97.3178% ( 24) 00:17:23.679 2.453 - 2.465: 97.4913% ( 22) 00:17:23.679 2.465 - 2.477: 97.7517% ( 33) 00:17:23.679 2.477 - 2.489: 97.9647% ( 27) 00:17:23.679 2.489 - 2.501: 98.0830% ( 15) 00:17:23.679 2.501 - 2.513: 98.1224% ( 5) 00:17:23.679 2.513 - 2.524: 98.2013% ( 10) 00:17:23.679 2.524 - 2.536: 98.2644% ( 8) 00:17:23.679 2.536 - 2.548: 98.3275% ( 8) 00:17:23.679 2.548 - 2.560: 98.3670% ( 5) 00:17:23.679 2.560 - 2.572: 98.3907% ( 3) 00:17:23.679 2.584 - 2.596: 98.4064% ( 2) 00:17:23.679 2.596 - 2.607: 98.4143% ( 1) 00:17:23.679 2.607 - 2.619: 98.4222% ( 1) 00:17:23.679 2.643 - 2.655: 98.4301% ( 1) 00:17:23.679 2.750 - 2.761: 98.4380% ( 1) 00:17:23.679 2.833 - 2.844: 98.4459% ( 1) 00:17:23.679 2.904 - 2.916: 98.4538% ( 1) 00:17:23.679 3.390 - 3.413: 98.4617% ( 1) 
00:17:23.679 3.413 - 3.437: 98.4695% ( 1) 00:17:23.679 3.437 - 3.461: 98.4853% ( 2) 00:17:23.679 3.484 - 3.508: 98.5090% ( 3) 00:17:23.679 3.508 - 3.532: 98.5169% ( 1) 00:17:23.679 3.532 - 3.556: 98.5327% ( 2) 00:17:23.679 3.556 - 3.579: 98.5405% ( 1) 00:17:23.679 3.579 - 3.603: 98.5484% ( 1) 00:17:23.679 3.603 - 3.627: 98.5563% ( 1) 00:17:23.679 3.650 - 3.674: 98.5642% ( 1) 00:17:23.679 3.721 - 3.745: 98.5879% ( 3) 00:17:23.679 3.745 - 3.769: 98.6037% ( 2) 00:17:23.679 3.769 - 3.793: 98.6273% ( 3) 00:17:23.679 3.816 - 3.840: 98.6352% ( 1) 00:17:23.679 3.840 - 3.864: 98.6431% ( 1) 00:17:23.679 3.887 - 3.911: 98.6510% ( 1) 00:17:23.679 3.959 - 3.982: 98.6668% ( 2) 00:17:23.679 4.172 - 4.196: 98.6747% ( 1) 00:17:23.679 5.357 - 5.381: 98.6825% ( 1) 00:17:23.679 6.210 - 6.258: 98.6904% ( 1) 00:17:23.679 6.258 - 6.305: 98.6983% ( 1) 00:17:23.679 6.353 - 6.400: 98.7062% ( 1) 00:17:23.679 6.495 - 6.542: 98.7141% ( 1) 00:17:23.679 6.684 - 6.732: 98.7220% ( 1) 00:17:23.679 6.921 - 6.969: 98.7299% ( 1) 00:17:23.679 7.016 - 7.064: 98.7378% ( 1) 00:17:23.679 7.396 - 7.443: 98.7457% ( 1) 00:17:23.679 7.443 - 7.490: 98.7536% ( 1) 00:17:23.679 7.870 - 7.917: 98.7614% ( 1) 00:17:23.679 8.012 - 8.059: 98.7693% ( 1) 00:17:23.679 8.107 - 8.154: 98.7772% ( 1) 00:17:23.679 8.154 - 8.201: 98.7851% ( 1) 00:17:23.679 8.344 - 8.391: 98.7930% ( 1) 00:17:23.679 8.818 - 8.865: 98.8009% ( 1) 00:17:23.679 9.150 - 9.197: 98.8088% ( 1) 00:17:23.679 15.360 - 15.455: 98.8167% ( 1) 00:17:23.679 15.550 - 15.644: 98.8246% ( 1) 00:17:23.679 15.739 - 15.834: 98.8324% ( 1) 00:17:23.679 15.834 - 15.929: 98.8640% ( 4) 00:17:23.679 15.929 - 16.024: 98.9034% ( 5) 00:17:23.679 16.024 - 16.119: 98.9350% ( 4) 00:17:23.679 16.119 - 16.213: 98.9587% ( 3) 00:17:23.679 16.213 - 16.308: 99.0060% ( 6) 00:17:23.679 16.308 - 16.403: 99.0218% ( 2) 00:17:23.679 16.403 - 16.498: 99.0454% ( 3) 00:17:23.679 16.498 - 16.593: 99.0612% ( 2) 00:17:23.679 16.593 - 16.687: 99.1007% ( 5) 00:17:23.679 16.687 - 16.782: 99.1638% ( 8) 
00:17:23.679 16.782 - 16.877: 9[2024-12-05 13:48:55.132293] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:23.679 9.2111% ( 6) 00:17:23.679 16.877 - 16.972: 99.2427% ( 4) 00:17:23.679 16.972 - 17.067: 99.2663% ( 3) 00:17:23.679 17.067 - 17.161: 99.2742% ( 1) 00:17:23.680 17.256 - 17.351: 99.2821% ( 1) 00:17:23.680 17.446 - 17.541: 99.2900% ( 1) 00:17:23.680 17.636 - 17.730: 99.3058% ( 2) 00:17:23.680 17.730 - 17.825: 99.3137% ( 1) 00:17:23.680 17.920 - 18.015: 99.3216% ( 1) 00:17:23.680 3980.705 - 4004.978: 99.8896% ( 72) 00:17:23.680 4004.978 - 4029.250: 100.0000% ( 14) 00:17:23.680 00:17:23.680 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:23.680 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:23.680 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:23.680 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:23.680 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:24.243 [ 00:17:24.243 { 00:17:24.243 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:24.243 "subtype": "Discovery", 00:17:24.243 "listen_addresses": [], 00:17:24.243 "allow_any_host": true, 00:17:24.243 "hosts": [] 00:17:24.243 }, 00:17:24.243 { 00:17:24.243 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:24.243 "subtype": "NVMe", 00:17:24.243 "listen_addresses": [ 00:17:24.243 { 00:17:24.243 "trtype": "VFIOUSER", 00:17:24.243 "adrfam": "IPv4", 00:17:24.243 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:24.243 "trsvcid": "0" 00:17:24.243 } 
00:17:24.243 ], 00:17:24.243 "allow_any_host": true, 00:17:24.243 "hosts": [], 00:17:24.243 "serial_number": "SPDK1", 00:17:24.243 "model_number": "SPDK bdev Controller", 00:17:24.243 "max_namespaces": 32, 00:17:24.243 "min_cntlid": 1, 00:17:24.243 "max_cntlid": 65519, 00:17:24.243 "namespaces": [ 00:17:24.243 { 00:17:24.243 "nsid": 1, 00:17:24.243 "bdev_name": "Malloc1", 00:17:24.243 "name": "Malloc1", 00:17:24.243 "nguid": "2320CD43EE6D4090B717343266390023", 00:17:24.243 "uuid": "2320cd43-ee6d-4090-b717-343266390023" 00:17:24.243 } 00:17:24.243 ] 00:17:24.243 }, 00:17:24.243 { 00:17:24.243 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:24.243 "subtype": "NVMe", 00:17:24.243 "listen_addresses": [ 00:17:24.243 { 00:17:24.243 "trtype": "VFIOUSER", 00:17:24.243 "adrfam": "IPv4", 00:17:24.243 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:24.243 "trsvcid": "0" 00:17:24.243 } 00:17:24.243 ], 00:17:24.243 "allow_any_host": true, 00:17:24.243 "hosts": [], 00:17:24.243 "serial_number": "SPDK2", 00:17:24.243 "model_number": "SPDK bdev Controller", 00:17:24.243 "max_namespaces": 32, 00:17:24.243 "min_cntlid": 1, 00:17:24.243 "max_cntlid": 65519, 00:17:24.243 "namespaces": [ 00:17:24.243 { 00:17:24.243 "nsid": 1, 00:17:24.243 "bdev_name": "Malloc2", 00:17:24.243 "name": "Malloc2", 00:17:24.243 "nguid": "3BCAB72220AD4EDCB3D45508C8D04FCB", 00:17:24.243 "uuid": "3bcab722-20ad-4edc-b3d4-5508c8d04fcb" 00:17:24.243 } 00:17:24.243 ] 00:17:24.243 } 00:17:24.243 ] 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2220166 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 
2 -g -t /tmp/aer_touch_file 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:24.243 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:24.243 [2024-12-05 13:48:55.679960] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:24.500 Malloc3 00:17:24.500 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:24.757 [2024-12-05 13:48:56.097135] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:24.757 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:24.757 Asynchronous Event Request test 00:17:24.757 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.757 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.757 Registering asynchronous event callbacks... 
00:17:24.757 Starting namespace attribute notice tests for all controllers... 00:17:24.757 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:24.757 aer_cb - Changed Namespace 00:17:24.757 Cleaning up... 00:17:25.016 [ 00:17:25.016 { 00:17:25.016 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.016 "subtype": "Discovery", 00:17:25.016 "listen_addresses": [], 00:17:25.016 "allow_any_host": true, 00:17:25.016 "hosts": [] 00:17:25.016 }, 00:17:25.016 { 00:17:25.016 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.016 "subtype": "NVMe", 00:17:25.016 "listen_addresses": [ 00:17:25.016 { 00:17:25.016 "trtype": "VFIOUSER", 00:17:25.016 "adrfam": "IPv4", 00:17:25.016 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.016 "trsvcid": "0" 00:17:25.016 } 00:17:25.016 ], 00:17:25.016 "allow_any_host": true, 00:17:25.016 "hosts": [], 00:17:25.016 "serial_number": "SPDK1", 00:17:25.016 "model_number": "SPDK bdev Controller", 00:17:25.016 "max_namespaces": 32, 00:17:25.016 "min_cntlid": 1, 00:17:25.016 "max_cntlid": 65519, 00:17:25.016 "namespaces": [ 00:17:25.016 { 00:17:25.016 "nsid": 1, 00:17:25.016 "bdev_name": "Malloc1", 00:17:25.016 "name": "Malloc1", 00:17:25.016 "nguid": "2320CD43EE6D4090B717343266390023", 00:17:25.016 "uuid": "2320cd43-ee6d-4090-b717-343266390023" 00:17:25.016 }, 00:17:25.016 { 00:17:25.016 "nsid": 2, 00:17:25.016 "bdev_name": "Malloc3", 00:17:25.016 "name": "Malloc3", 00:17:25.016 "nguid": "F65D33F9274E41428572CE6E4031162C", 00:17:25.016 "uuid": "f65d33f9-274e-4142-8572-ce6e4031162c" 00:17:25.016 } 00:17:25.016 ] 00:17:25.016 }, 00:17:25.016 { 00:17:25.016 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.016 "subtype": "NVMe", 00:17:25.016 "listen_addresses": [ 00:17:25.016 { 00:17:25.016 "trtype": "VFIOUSER", 00:17:25.016 "adrfam": "IPv4", 00:17:25.016 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.016 "trsvcid": "0" 00:17:25.016 } 00:17:25.016 ], 00:17:25.016 
"allow_any_host": true, 00:17:25.016 "hosts": [], 00:17:25.016 "serial_number": "SPDK2", 00:17:25.016 "model_number": "SPDK bdev Controller", 00:17:25.016 "max_namespaces": 32, 00:17:25.016 "min_cntlid": 1, 00:17:25.016 "max_cntlid": 65519, 00:17:25.016 "namespaces": [ 00:17:25.016 { 00:17:25.016 "nsid": 1, 00:17:25.016 "bdev_name": "Malloc2", 00:17:25.016 "name": "Malloc2", 00:17:25.016 "nguid": "3BCAB72220AD4EDCB3D45508C8D04FCB", 00:17:25.016 "uuid": "3bcab722-20ad-4edc-b3d4-5508c8d04fcb" 00:17:25.016 } 00:17:25.016 ] 00:17:25.016 } 00:17:25.016 ] 00:17:25.016 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2220166 00:17:25.016 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:25.016 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:25.016 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:25.016 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:25.016 [2024-12-05 13:48:56.399938] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:17:25.016 [2024-12-05 13:48:56.399976] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220302 ] 00:17:25.016 [2024-12-05 13:48:56.449176] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:25.016 [2024-12-05 13:48:56.453527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.016 [2024-12-05 13:48:56.453561] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f401a2dc000 00:17:25.016 [2024-12-05 13:48:56.454520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.016 [2024-12-05 13:48:56.455523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.016 [2024-12-05 13:48:56.456530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.016 [2024-12-05 13:48:56.457540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.016 [2024-12-05 13:48:56.458546] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.016 [2024-12-05 13:48:56.459561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.016 [2024-12-05 13:48:56.460556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.016 
[2024-12-05 13:48:56.461573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.016 [2024-12-05 13:48:56.462576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.016 [2024-12-05 13:48:56.462598] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f401a2d1000 00:17:25.016 [2024-12-05 13:48:56.463731] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:25.016 [2024-12-05 13:48:56.476394] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:25.016 [2024-12-05 13:48:56.480452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:25.016 [2024-12-05 13:48:56.482553] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:25.016 [2024-12-05 13:48:56.482607] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:25.016 [2024-12-05 13:48:56.482698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:25.016 [2024-12-05 13:48:56.482745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:25.016 [2024-12-05 13:48:56.482756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:25.016 [2024-12-05 13:48:56.483564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:25.016 [2024-12-05 13:48:56.483589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:25.016 [2024-12-05 13:48:56.483604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:25.016 [2024-12-05 13:48:56.484572] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:25.016 [2024-12-05 13:48:56.484594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:25.016 [2024-12-05 13:48:56.484608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:25.016 [2024-12-05 13:48:56.485572] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:25.017 [2024-12-05 13:48:56.485593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:25.017 [2024-12-05 13:48:56.486584] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:25.017 [2024-12-05 13:48:56.486604] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:25.017 [2024-12-05 13:48:56.486613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:25.017 [2024-12-05 13:48:56.486625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:25.017 [2024-12-05 13:48:56.486735] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:25.017 [2024-12-05 13:48:56.486743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:25.017 [2024-12-05 13:48:56.486751] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:25.017 [2024-12-05 13:48:56.487597] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:25.017 [2024-12-05 13:48:56.488600] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:25.017 [2024-12-05 13:48:56.489606] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:25.017 [2024-12-05 13:48:56.490603] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.017 [2024-12-05 13:48:56.490669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:25.017 [2024-12-05 13:48:56.491617] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:25.017 [2024-12-05 13:48:56.491637] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:25.017 [2024-12-05 13:48:56.491647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.491670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:25.017 [2024-12-05 13:48:56.491684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.491731] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.017 [2024-12-05 13:48:56.491740] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.017 [2024-12-05 13:48:56.491746] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.017 [2024-12-05 13:48:56.491763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.017 [2024-12-05 13:48:56.498434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:25.017 [2024-12-05 13:48:56.498456] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:25.017 [2024-12-05 13:48:56.498465] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:25.017 [2024-12-05 13:48:56.498473] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:25.017 [2024-12-05 13:48:56.498481] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:25.017 [2024-12-05 13:48:56.498489] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:25.017 [2024-12-05 13:48:56.498497] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:25.017 [2024-12-05 13:48:56.498505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.498517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.498533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:25.017 [2024-12-05 13:48:56.506426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:25.017 [2024-12-05 13:48:56.506451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.017 [2024-12-05 13:48:56.506465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.017 [2024-12-05 13:48:56.506483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.017 [2024-12-05 13:48:56.506497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.017 [2024-12-05 13:48:56.506506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.506523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.506539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:25.017 [2024-12-05 13:48:56.514428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:25.017 [2024-12-05 13:48:56.514446] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:25.017 [2024-12-05 13:48:56.514455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.514472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.514483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.514497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:25.017 [2024-12-05 13:48:56.522428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:25.017 [2024-12-05 13:48:56.522507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.522525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:25.017 
[2024-12-05 13:48:56.522538] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:25.017 [2024-12-05 13:48:56.522547] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:25.017 [2024-12-05 13:48:56.522553] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.017 [2024-12-05 13:48:56.522563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:25.017 [2024-12-05 13:48:56.530429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:25.017 [2024-12-05 13:48:56.530457] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:25.017 [2024-12-05 13:48:56.530473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.530487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.530500] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.017 [2024-12-05 13:48:56.530508] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.017 [2024-12-05 13:48:56.530514] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.017 [2024-12-05 13:48:56.530527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.017 [2024-12-05 13:48:56.538428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:25.017 [2024-12-05 13:48:56.538451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.538466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:25.017 [2024-12-05 13:48:56.538480] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.017 [2024-12-05 13:48:56.538488] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.017 [2024-12-05 13:48:56.538494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.017 [2024-12-05 13:48:56.538503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.546429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 13:48:56.546454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:25.277 [2024-12-05 13:48:56.546468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:25.277 [2024-12-05 13:48:56.546482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:25.277 [2024-12-05 13:48:56.546492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:17:25.277 [2024-12-05 13:48:56.546501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:25.277 [2024-12-05 13:48:56.546509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:25.277 [2024-12-05 13:48:56.546517] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:25.277 [2024-12-05 13:48:56.546525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:25.277 [2024-12-05 13:48:56.546533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:25.277 [2024-12-05 13:48:56.546557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.554434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 13:48:56.554459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.562443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 13:48:56.562468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.570426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 
13:48:56.570452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.578430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 13:48:56.578462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:25.277 [2024-12-05 13:48:56.578473] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:25.277 [2024-12-05 13:48:56.578479] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:25.277 [2024-12-05 13:48:56.578484] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:25.277 [2024-12-05 13:48:56.578490] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:25.277 [2024-12-05 13:48:56.578500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:25.277 [2024-12-05 13:48:56.578512] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:25.277 [2024-12-05 13:48:56.578520] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:25.277 [2024-12-05 13:48:56.578526] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.277 [2024-12-05 13:48:56.578535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.578546] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:25.277 [2024-12-05 13:48:56.578553] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.277 [2024-12-05 13:48:56.578559] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.277 [2024-12-05 13:48:56.578568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.578580] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:25.277 [2024-12-05 13:48:56.578588] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:25.277 [2024-12-05 13:48:56.578593] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.277 [2024-12-05 13:48:56.578602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:25.277 [2024-12-05 13:48:56.586431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 13:48:56.586459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 13:48:56.586477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:25.277 [2024-12-05 13:48:56.586489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:25.277 ===================================================== 00:17:25.277 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:25.277 ===================================================== 00:17:25.277 Controller Capabilities/Features 00:17:25.277 
================================ 00:17:25.277 Vendor ID: 4e58 00:17:25.277 Subsystem Vendor ID: 4e58 00:17:25.277 Serial Number: SPDK2 00:17:25.277 Model Number: SPDK bdev Controller 00:17:25.277 Firmware Version: 25.01 00:17:25.277 Recommended Arb Burst: 6 00:17:25.277 IEEE OUI Identifier: 8d 6b 50 00:17:25.277 Multi-path I/O 00:17:25.277 May have multiple subsystem ports: Yes 00:17:25.277 May have multiple controllers: Yes 00:17:25.277 Associated with SR-IOV VF: No 00:17:25.277 Max Data Transfer Size: 131072 00:17:25.277 Max Number of Namespaces: 32 00:17:25.277 Max Number of I/O Queues: 127 00:17:25.277 NVMe Specification Version (VS): 1.3 00:17:25.277 NVMe Specification Version (Identify): 1.3 00:17:25.277 Maximum Queue Entries: 256 00:17:25.277 Contiguous Queues Required: Yes 00:17:25.277 Arbitration Mechanisms Supported 00:17:25.277 Weighted Round Robin: Not Supported 00:17:25.277 Vendor Specific: Not Supported 00:17:25.277 Reset Timeout: 15000 ms 00:17:25.277 Doorbell Stride: 4 bytes 00:17:25.277 NVM Subsystem Reset: Not Supported 00:17:25.277 Command Sets Supported 00:17:25.277 NVM Command Set: Supported 00:17:25.277 Boot Partition: Not Supported 00:17:25.277 Memory Page Size Minimum: 4096 bytes 00:17:25.277 Memory Page Size Maximum: 4096 bytes 00:17:25.277 Persistent Memory Region: Not Supported 00:17:25.277 Optional Asynchronous Events Supported 00:17:25.278 Namespace Attribute Notices: Supported 00:17:25.278 Firmware Activation Notices: Not Supported 00:17:25.278 ANA Change Notices: Not Supported 00:17:25.278 PLE Aggregate Log Change Notices: Not Supported 00:17:25.278 LBA Status Info Alert Notices: Not Supported 00:17:25.278 EGE Aggregate Log Change Notices: Not Supported 00:17:25.278 Normal NVM Subsystem Shutdown event: Not Supported 00:17:25.278 Zone Descriptor Change Notices: Not Supported 00:17:25.278 Discovery Log Change Notices: Not Supported 00:17:25.278 Controller Attributes 00:17:25.278 128-bit Host Identifier: Supported 00:17:25.278 
Non-Operational Permissive Mode: Not Supported 00:17:25.278 NVM Sets: Not Supported 00:17:25.278 Read Recovery Levels: Not Supported 00:17:25.278 Endurance Groups: Not Supported 00:17:25.278 Predictable Latency Mode: Not Supported 00:17:25.278 Traffic Based Keep ALive: Not Supported 00:17:25.278 Namespace Granularity: Not Supported 00:17:25.278 SQ Associations: Not Supported 00:17:25.278 UUID List: Not Supported 00:17:25.278 Multi-Domain Subsystem: Not Supported 00:17:25.278 Fixed Capacity Management: Not Supported 00:17:25.278 Variable Capacity Management: Not Supported 00:17:25.278 Delete Endurance Group: Not Supported 00:17:25.278 Delete NVM Set: Not Supported 00:17:25.278 Extended LBA Formats Supported: Not Supported 00:17:25.278 Flexible Data Placement Supported: Not Supported 00:17:25.278 00:17:25.278 Controller Memory Buffer Support 00:17:25.278 ================================ 00:17:25.278 Supported: No 00:17:25.278 00:17:25.278 Persistent Memory Region Support 00:17:25.278 ================================ 00:17:25.278 Supported: No 00:17:25.278 00:17:25.278 Admin Command Set Attributes 00:17:25.278 ============================ 00:17:25.278 Security Send/Receive: Not Supported 00:17:25.278 Format NVM: Not Supported 00:17:25.278 Firmware Activate/Download: Not Supported 00:17:25.278 Namespace Management: Not Supported 00:17:25.278 Device Self-Test: Not Supported 00:17:25.278 Directives: Not Supported 00:17:25.278 NVMe-MI: Not Supported 00:17:25.278 Virtualization Management: Not Supported 00:17:25.278 Doorbell Buffer Config: Not Supported 00:17:25.278 Get LBA Status Capability: Not Supported 00:17:25.278 Command & Feature Lockdown Capability: Not Supported 00:17:25.278 Abort Command Limit: 4 00:17:25.278 Async Event Request Limit: 4 00:17:25.278 Number of Firmware Slots: N/A 00:17:25.278 Firmware Slot 1 Read-Only: N/A 00:17:25.278 Firmware Activation Without Reset: N/A 00:17:25.278 Multiple Update Detection Support: N/A 00:17:25.278 Firmware Update 
Granularity: No Information Provided 00:17:25.278 Per-Namespace SMART Log: No 00:17:25.278 Asymmetric Namespace Access Log Page: Not Supported 00:17:25.278 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:25.278 Command Effects Log Page: Supported 00:17:25.278 Get Log Page Extended Data: Supported 00:17:25.278 Telemetry Log Pages: Not Supported 00:17:25.278 Persistent Event Log Pages: Not Supported 00:17:25.278 Supported Log Pages Log Page: May Support 00:17:25.278 Commands Supported & Effects Log Page: Not Supported 00:17:25.278 Feature Identifiers & Effects Log Page:May Support 00:17:25.278 NVMe-MI Commands & Effects Log Page: May Support 00:17:25.278 Data Area 4 for Telemetry Log: Not Supported 00:17:25.278 Error Log Page Entries Supported: 128 00:17:25.278 Keep Alive: Supported 00:17:25.278 Keep Alive Granularity: 10000 ms 00:17:25.278 00:17:25.278 NVM Command Set Attributes 00:17:25.278 ========================== 00:17:25.278 Submission Queue Entry Size 00:17:25.278 Max: 64 00:17:25.278 Min: 64 00:17:25.278 Completion Queue Entry Size 00:17:25.278 Max: 16 00:17:25.278 Min: 16 00:17:25.278 Number of Namespaces: 32 00:17:25.278 Compare Command: Supported 00:17:25.278 Write Uncorrectable Command: Not Supported 00:17:25.278 Dataset Management Command: Supported 00:17:25.278 Write Zeroes Command: Supported 00:17:25.278 Set Features Save Field: Not Supported 00:17:25.278 Reservations: Not Supported 00:17:25.278 Timestamp: Not Supported 00:17:25.278 Copy: Supported 00:17:25.278 Volatile Write Cache: Present 00:17:25.278 Atomic Write Unit (Normal): 1 00:17:25.278 Atomic Write Unit (PFail): 1 00:17:25.278 Atomic Compare & Write Unit: 1 00:17:25.278 Fused Compare & Write: Supported 00:17:25.278 Scatter-Gather List 00:17:25.278 SGL Command Set: Supported (Dword aligned) 00:17:25.278 SGL Keyed: Not Supported 00:17:25.278 SGL Bit Bucket Descriptor: Not Supported 00:17:25.278 SGL Metadata Pointer: Not Supported 00:17:25.278 Oversized SGL: Not Supported 00:17:25.278 SGL 
Metadata Address: Not Supported 00:17:25.278 SGL Offset: Not Supported 00:17:25.278 Transport SGL Data Block: Not Supported 00:17:25.278 Replay Protected Memory Block: Not Supported 00:17:25.278 00:17:25.278 Firmware Slot Information 00:17:25.278 ========================= 00:17:25.278 Active slot: 1 00:17:25.278 Slot 1 Firmware Revision: 25.01 00:17:25.278 00:17:25.278 00:17:25.278 Commands Supported and Effects 00:17:25.278 ============================== 00:17:25.278 Admin Commands 00:17:25.278 -------------- 00:17:25.278 Get Log Page (02h): Supported 00:17:25.278 Identify (06h): Supported 00:17:25.278 Abort (08h): Supported 00:17:25.278 Set Features (09h): Supported 00:17:25.278 Get Features (0Ah): Supported 00:17:25.278 Asynchronous Event Request (0Ch): Supported 00:17:25.278 Keep Alive (18h): Supported 00:17:25.278 I/O Commands 00:17:25.278 ------------ 00:17:25.278 Flush (00h): Supported LBA-Change 00:17:25.278 Write (01h): Supported LBA-Change 00:17:25.278 Read (02h): Supported 00:17:25.278 Compare (05h): Supported 00:17:25.278 Write Zeroes (08h): Supported LBA-Change 00:17:25.278 Dataset Management (09h): Supported LBA-Change 00:17:25.278 Copy (19h): Supported LBA-Change 00:17:25.278 00:17:25.278 Error Log 00:17:25.278 ========= 00:17:25.278 00:17:25.278 Arbitration 00:17:25.278 =========== 00:17:25.278 Arbitration Burst: 1 00:17:25.278 00:17:25.278 Power Management 00:17:25.278 ================ 00:17:25.278 Number of Power States: 1 00:17:25.278 Current Power State: Power State #0 00:17:25.278 Power State #0: 00:17:25.278 Max Power: 0.00 W 00:17:25.278 Non-Operational State: Operational 00:17:25.278 Entry Latency: Not Reported 00:17:25.278 Exit Latency: Not Reported 00:17:25.278 Relative Read Throughput: 0 00:17:25.278 Relative Read Latency: 0 00:17:25.278 Relative Write Throughput: 0 00:17:25.278 Relative Write Latency: 0 00:17:25.278 Idle Power: Not Reported 00:17:25.278 Active Power: Not Reported 00:17:25.278 Non-Operational Permissive Mode: Not 
Supported 00:17:25.278 00:17:25.278 Health Information 00:17:25.278 ================== 00:17:25.278 Critical Warnings: 00:17:25.278 Available Spare Space: OK 00:17:25.278 Temperature: OK 00:17:25.278 Device Reliability: OK 00:17:25.278 Read Only: No 00:17:25.278 Volatile Memory Backup: OK 00:17:25.278 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:25.278 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:25.278 Available Spare: 0% 00:17:25.278 Available Sp[2024-12-05 13:48:56.586605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:25.278 [2024-12-05 13:48:56.594428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:25.278 [2024-12-05 13:48:56.594477] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:25.279 [2024-12-05 13:48:56.594495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.279 [2024-12-05 13:48:56.594506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.279 [2024-12-05 13:48:56.594515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.279 [2024-12-05 13:48:56.594528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.279 [2024-12-05 13:48:56.594615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:25.279 [2024-12-05 13:48:56.594636] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:25.279 
[2024-12-05 13:48:56.595615] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:25.279 [2024-12-05 13:48:56.595686] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:25.279 [2024-12-05 13:48:56.595701] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:25.279 [2024-12-05 13:48:56.596626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:25.279 [2024-12-05 13:48:56.596650] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:25.279 [2024-12-05 13:48:56.596702] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:25.279 [2024-12-05 13:48:56.599433] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:25.279 are Threshold: 0% 00:17:25.279 Life Percentage Used: 0% 00:17:25.279 Data Units Read: 0 00:17:25.279 Data Units Written: 0 00:17:25.279 Host Read Commands: 0 00:17:25.279 Host Write Commands: 0 00:17:25.279 Controller Busy Time: 0 minutes 00:17:25.279 Power Cycles: 0 00:17:25.279 Power On Hours: 0 hours 00:17:25.279 Unsafe Shutdowns: 0 00:17:25.279 Unrecoverable Media Errors: 0 00:17:25.279 Lifetime Error Log Entries: 0 00:17:25.279 Warning Temperature Time: 0 minutes 00:17:25.279 Critical Temperature Time: 0 minutes 00:17:25.279 00:17:25.279 Number of Queues 00:17:25.279 ================ 00:17:25.279 Number of I/O Submission Queues: 127 00:17:25.279 Number of I/O Completion Queues: 127 00:17:25.279 00:17:25.279 Active Namespaces 00:17:25.279 ================= 00:17:25.279 Namespace ID:1 00:17:25.279 Error Recovery Timeout: Unlimited 
00:17:25.279 Command Set Identifier: NVM (00h) 00:17:25.279 Deallocate: Supported 00:17:25.279 Deallocated/Unwritten Error: Not Supported 00:17:25.279 Deallocated Read Value: Unknown 00:17:25.279 Deallocate in Write Zeroes: Not Supported 00:17:25.279 Deallocated Guard Field: 0xFFFF 00:17:25.279 Flush: Supported 00:17:25.279 Reservation: Supported 00:17:25.279 Namespace Sharing Capabilities: Multiple Controllers 00:17:25.279 Size (in LBAs): 131072 (0GiB) 00:17:25.279 Capacity (in LBAs): 131072 (0GiB) 00:17:25.279 Utilization (in LBAs): 131072 (0GiB) 00:17:25.279 NGUID: 3BCAB72220AD4EDCB3D45508C8D04FCB 00:17:25.279 UUID: 3bcab722-20ad-4edc-b3d4-5508c8d04fcb 00:17:25.279 Thin Provisioning: Not Supported 00:17:25.279 Per-NS Atomic Units: Yes 00:17:25.279 Atomic Boundary Size (Normal): 0 00:17:25.279 Atomic Boundary Size (PFail): 0 00:17:25.279 Atomic Boundary Offset: 0 00:17:25.279 Maximum Single Source Range Length: 65535 00:17:25.279 Maximum Copy Length: 65535 00:17:25.279 Maximum Source Range Count: 1 00:17:25.279 NGUID/EUI64 Never Reused: No 00:17:25.279 Namespace Write Protected: No 00:17:25.279 Number of LBA Formats: 1 00:17:25.279 Current LBA Format: LBA Format #00 00:17:25.279 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:25.279 00:17:25.279 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:25.537 [2024-12-05 13:48:56.838269] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:30.803 Initializing NVMe Controllers 00:17:30.803 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:30.803 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:17:30.803 Initialization complete. Launching workers. 00:17:30.803 ======================================================== 00:17:30.803 Latency(us) 00:17:30.803 Device Information : IOPS MiB/s Average min max 00:17:30.803 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31190.94 121.84 4103.14 1232.68 7632.08 00:17:30.803 ======================================================== 00:17:30.803 Total : 31190.94 121.84 4103.14 1232.68 7632.08 00:17:30.803 00:17:30.803 [2024-12-05 13:49:01.943762] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:30.803 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:30.803 [2024-12-05 13:49:02.194463] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:36.155 Initializing NVMe Controllers 00:17:36.155 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:36.155 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:36.155 Initialization complete. Launching workers. 
00:17:36.155 ======================================================== 00:17:36.155 Latency(us) 00:17:36.155 Device Information : IOPS MiB/s Average min max 00:17:36.155 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29456.37 115.06 4344.96 1238.33 9873.49 00:17:36.155 ======================================================== 00:17:36.155 Total : 29456.37 115.06 4344.96 1238.33 9873.49 00:17:36.155 00:17:36.155 [2024-12-05 13:49:07.220414] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:36.155 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:36.155 [2024-12-05 13:49:07.451188] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:41.422 [2024-12-05 13:49:12.588573] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:41.422 Initializing NVMe Controllers 00:17:41.422 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:41.422 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:41.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:41.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:41.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:41.422 Initialization complete. Launching workers. 
00:17:41.422 Starting thread on core 2 00:17:41.422 Starting thread on core 3 00:17:41.422 Starting thread on core 1 00:17:41.422 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:41.422 [2024-12-05 13:49:12.908928] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:44.701 [2024-12-05 13:49:15.993041] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:44.701 Initializing NVMe Controllers 00:17:44.701 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:44.701 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:44.701 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:44.701 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:44.701 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:44.701 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:44.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:44.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:44.701 Initialization complete. Launching workers. 
00:17:44.701 Starting thread on core 1 with urgent priority queue 00:17:44.701 Starting thread on core 2 with urgent priority queue 00:17:44.701 Starting thread on core 3 with urgent priority queue 00:17:44.701 Starting thread on core 0 with urgent priority queue 00:17:44.701 SPDK bdev Controller (SPDK2 ) core 0: 5382.33 IO/s 18.58 secs/100000 ios 00:17:44.701 SPDK bdev Controller (SPDK2 ) core 1: 4805.33 IO/s 20.81 secs/100000 ios 00:17:44.701 SPDK bdev Controller (SPDK2 ) core 2: 5387.00 IO/s 18.56 secs/100000 ios 00:17:44.701 SPDK bdev Controller (SPDK2 ) core 3: 5574.00 IO/s 17.94 secs/100000 ios 00:17:44.701 ======================================================== 00:17:44.701 00:17:44.701 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:44.958 [2024-12-05 13:49:16.312935] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:44.958 Initializing NVMe Controllers 00:17:44.958 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:44.958 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:44.958 Namespace ID: 1 size: 0GB 00:17:44.958 Initialization complete. 00:17:44.958 INFO: using host memory buffer for IO 00:17:44.958 Hello world! 
00:17:44.958 [2024-12-05 13:49:16.323000] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:44.958 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:45.215 [2024-12-05 13:49:16.626716] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:46.589 Initializing NVMe Controllers 00:17:46.589 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.589 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.589 Initialization complete. Launching workers. 00:17:46.589 submit (in ns) avg, min, max = 7337.0, 3510.0, 4019490.0 00:17:46.589 complete (in ns) avg, min, max = 28798.3, 2072.2, 4020257.8 00:17:46.589 00:17:46.589 Submit histogram 00:17:46.589 ================ 00:17:46.589 Range in us Cumulative Count 00:17:46.589 3.508 - 3.532: 0.5232% ( 67) 00:17:46.589 3.532 - 3.556: 1.8896% ( 175) 00:17:46.589 3.556 - 3.579: 6.3715% ( 574) 00:17:46.589 3.579 - 3.603: 12.7743% ( 820) 00:17:46.589 3.603 - 3.627: 21.4024% ( 1105) 00:17:46.589 3.627 - 3.650: 29.1247% ( 989) 00:17:46.589 3.650 - 3.674: 38.2447% ( 1168) 00:17:46.589 3.674 - 3.698: 45.2175% ( 893) 00:17:46.589 3.698 - 3.721: 53.6738% ( 1083) 00:17:46.589 3.721 - 3.745: 58.6788% ( 641) 00:17:46.589 3.745 - 3.769: 63.0671% ( 562) 00:17:46.589 3.769 - 3.793: 66.6667% ( 461) 00:17:46.589 3.793 - 3.816: 70.3443% ( 471) 00:17:46.589 3.816 - 3.840: 74.1470% ( 487) 00:17:46.589 3.840 - 3.864: 77.9886% ( 492) 00:17:46.589 3.864 - 3.887: 81.2759% ( 421) 00:17:46.589 3.887 - 3.911: 84.3523% ( 394) 00:17:46.589 3.911 - 3.935: 87.0774% ( 349) 00:17:46.589 3.935 - 3.959: 88.9826% ( 244) 00:17:46.589 3.959 - 3.982: 90.7629% ( 228) 00:17:46.589 3.982 - 4.006: 92.3089% ( 
198) 00:17:46.589 4.006 - 4.030: 93.3864% ( 138) 00:17:46.589 4.030 - 4.053: 94.3625% ( 125) 00:17:46.589 4.053 - 4.077: 95.0262% ( 85) 00:17:46.589 4.077 - 4.101: 95.6664% ( 82) 00:17:46.589 4.101 - 4.124: 96.0959% ( 55) 00:17:46.589 4.124 - 4.148: 96.3770% ( 36) 00:17:46.589 4.148 - 4.172: 96.6268% ( 32) 00:17:46.589 4.172 - 4.196: 96.7596% ( 17) 00:17:46.589 4.196 - 4.219: 96.9938% ( 30) 00:17:46.589 4.219 - 4.243: 97.0797% ( 11) 00:17:46.589 4.243 - 4.267: 97.1734% ( 12) 00:17:46.589 4.267 - 4.290: 97.2671% ( 12) 00:17:46.589 4.290 - 4.314: 97.3764% ( 14) 00:17:46.589 4.314 - 4.338: 97.4545% ( 10) 00:17:46.589 4.338 - 4.361: 97.5092% ( 7) 00:17:46.589 4.361 - 4.385: 97.5326% ( 3) 00:17:46.589 4.385 - 4.409: 97.5716% ( 5) 00:17:46.589 4.409 - 4.433: 97.5873% ( 2) 00:17:46.589 4.433 - 4.456: 97.5951% ( 1) 00:17:46.589 4.456 - 4.480: 97.6185% ( 3) 00:17:46.589 4.480 - 4.504: 97.6263% ( 1) 00:17:46.589 4.504 - 4.527: 97.6575% ( 4) 00:17:46.589 4.575 - 4.599: 97.6653% ( 1) 00:17:46.589 4.599 - 4.622: 97.6810% ( 2) 00:17:46.589 4.670 - 4.693: 97.6966% ( 2) 00:17:46.589 4.741 - 4.764: 97.7200% ( 3) 00:17:46.589 4.764 - 4.788: 97.7668% ( 6) 00:17:46.589 4.788 - 4.812: 97.8059% ( 5) 00:17:46.589 4.812 - 4.836: 97.8293% ( 3) 00:17:46.589 4.836 - 4.859: 97.8840% ( 7) 00:17:46.589 4.859 - 4.883: 97.9230% ( 5) 00:17:46.589 4.883 - 4.907: 97.9699% ( 6) 00:17:46.589 4.907 - 4.930: 98.0011% ( 4) 00:17:46.589 4.930 - 4.954: 98.0558% ( 7) 00:17:46.589 4.954 - 4.978: 98.1026% ( 6) 00:17:46.589 4.978 - 5.001: 98.1338% ( 4) 00:17:46.589 5.001 - 5.025: 98.1573% ( 3) 00:17:46.589 5.025 - 5.049: 98.1807% ( 3) 00:17:46.589 5.049 - 5.073: 98.2197% ( 5) 00:17:46.589 5.073 - 5.096: 98.2666% ( 6) 00:17:46.589 5.096 - 5.120: 98.2978% ( 4) 00:17:46.589 5.120 - 5.144: 98.3681% ( 9) 00:17:46.589 5.144 - 5.167: 98.3759% ( 1) 00:17:46.589 5.167 - 5.191: 98.4149% ( 5) 00:17:46.589 5.191 - 5.215: 98.4462% ( 4) 00:17:46.589 5.215 - 5.239: 98.4696% ( 3) 00:17:46.589 5.239 - 5.262: 98.4852% ( 2) 
00:17:46.589 5.310 - 5.333: 98.5086% ( 3) 00:17:46.589 5.333 - 5.357: 98.5242% ( 2) 00:17:46.589 5.404 - 5.428: 98.5399% ( 2) 00:17:46.589 5.428 - 5.452: 98.5477% ( 1) 00:17:46.589 5.452 - 5.476: 98.5555% ( 1) 00:17:46.589 5.594 - 5.618: 98.5633% ( 1) 00:17:46.589 5.760 - 5.784: 98.5711% ( 1) 00:17:46.589 5.807 - 5.831: 98.5789% ( 1) 00:17:46.589 5.855 - 5.879: 98.5867% ( 1) 00:17:46.589 5.926 - 5.950: 98.5945% ( 1) 00:17:46.589 5.950 - 5.973: 98.6023% ( 1) 00:17:46.589 6.044 - 6.068: 98.6101% ( 1) 00:17:46.589 6.258 - 6.305: 98.6179% ( 1) 00:17:46.589 6.542 - 6.590: 98.6336% ( 2) 00:17:46.589 6.732 - 6.779: 98.6492% ( 2) 00:17:46.589 6.874 - 6.921: 98.6570% ( 1) 00:17:46.589 7.016 - 7.064: 98.6648% ( 1) 00:17:46.589 7.064 - 7.111: 98.6882% ( 3) 00:17:46.589 7.111 - 7.159: 98.7038% ( 2) 00:17:46.589 7.490 - 7.538: 98.7116% ( 1) 00:17:46.589 7.870 - 7.917: 98.7195% ( 1) 00:17:46.589 7.917 - 7.964: 98.7273% ( 1) 00:17:46.589 8.012 - 8.059: 98.7351% ( 1) 00:17:46.589 8.059 - 8.107: 98.7507% ( 2) 00:17:46.589 8.201 - 8.249: 98.7663% ( 2) 00:17:46.589 8.439 - 8.486: 98.7897% ( 3) 00:17:46.589 8.533 - 8.581: 98.8053% ( 2) 00:17:46.589 8.676 - 8.723: 98.8131% ( 1) 00:17:46.589 9.055 - 9.102: 98.8210% ( 1) 00:17:46.589 9.434 - 9.481: 98.8288% ( 1) 00:17:46.589 9.481 - 9.529: 98.8366% ( 1) 00:17:46.589 9.719 - 9.766: 98.8444% ( 1) 00:17:46.589 9.861 - 9.908: 98.8522% ( 1) 00:17:46.589 10.145 - 10.193: 98.8600% ( 1) 00:17:46.589 10.193 - 10.240: 98.8678% ( 1) 00:17:46.589 10.240 - 10.287: 98.8756% ( 1) 00:17:46.589 10.287 - 10.335: 98.8834% ( 1) 00:17:46.589 10.335 - 10.382: 98.9068% ( 3) 00:17:46.589 10.524 - 10.572: 98.9147% ( 1) 00:17:46.589 10.761 - 10.809: 98.9303% ( 2) 00:17:46.589 11.236 - 11.283: 98.9381% ( 1) 00:17:46.589 11.330 - 11.378: 98.9459% ( 1) 00:17:46.589 11.425 - 11.473: 98.9537% ( 1) 00:17:46.589 11.520 - 11.567: 98.9615% ( 1) 00:17:46.589 11.710 - 11.757: 98.9771% ( 2) 00:17:46.589 11.804 - 11.852: 98.9849% ( 1) 00:17:46.589 11.899 - 11.947: 98.9927% ( 
1) 00:17:46.589 11.947 - 11.994: 99.0005% ( 1) 00:17:46.589 12.231 - 12.326: 99.0084% ( 1) 00:17:46.589 12.516 - 12.610: 99.0162% ( 1) 00:17:46.589 12.800 - 12.895: 99.0240% ( 1) 00:17:46.589 12.895 - 12.990: 99.0318% ( 1) 00:17:46.589 13.084 - 13.179: 99.0396% ( 1) 00:17:46.589 13.179 - 13.274: 99.0474% ( 1) 00:17:46.589 13.274 - 13.369: 99.0552% ( 1) 00:17:46.589 13.369 - 13.464: 99.0630% ( 1) 00:17:46.589 13.653 - 13.748: 99.0708% ( 1) 00:17:46.589 13.843 - 13.938: 99.0864% ( 2) 00:17:46.589 13.938 - 14.033: 99.0942% ( 1) 00:17:46.589 14.127 - 14.222: 99.1021% ( 1) 00:17:46.589 14.317 - 14.412: 99.1099% ( 1) 00:17:46.589 14.601 - 14.696: 99.1177% ( 1) 00:17:46.589 14.791 - 14.886: 99.1255% ( 1) 00:17:46.589 15.170 - 15.265: 99.1333% ( 1) 00:17:46.589 15.455 - 15.550: 99.1411% ( 1) 00:17:46.589 15.929 - 16.024: 99.1489% ( 1) 00:17:46.589 17.161 - 17.256: 99.1879% ( 5) 00:17:46.589 17.256 - 17.351: 99.2114% ( 3) 00:17:46.589 17.351 - 17.446: 99.2192% ( 1) 00:17:46.589 17.446 - 17.541: 99.2348% ( 2) 00:17:46.589 17.541 - 17.636: 99.2816% ( 6) 00:17:46.589 17.636 - 17.730: 99.3207% ( 5) 00:17:46.589 17.730 - 17.825: 99.3519% ( 4) 00:17:46.589 17.825 - 17.920: 99.4222% ( 9) 00:17:46.589 17.920 - 18.015: 99.4300% ( 1) 00:17:46.589 18.015 - 18.110: 99.4690% ( 5) 00:17:46.589 18.110 - 18.204: 99.5003% ( 4) 00:17:46.589 18.204 - 18.299: 99.5393% ( 5) 00:17:46.589 18.299 - 18.394: 99.6252% ( 11) 00:17:46.589 18.394 - 18.489: 99.6799% ( 7) 00:17:46.589 18.489 - 18.584: 99.7189% ( 5) 00:17:46.589 18.584 - 18.679: 99.7423% ( 3) 00:17:46.589 18.679 - 18.773: 99.7579% ( 2) 00:17:46.589 18.773 - 18.868: 99.7892% ( 4) 00:17:46.589 18.868 - 18.963: 99.8126% ( 3) 00:17:46.589 18.963 - 19.058: 99.8204% ( 1) 00:17:46.589 19.058 - 19.153: 99.8282% ( 1) 00:17:46.589 19.437 - 19.532: 99.8360% ( 1) 00:17:46.589 19.721 - 19.816: 99.8438% ( 1) 00:17:46.590 23.040 - 23.135: 99.8516% ( 1) 00:17:46.590 23.324 - 23.419: 99.8595% ( 1) 00:17:46.590 23.799 - 23.893: 99.8751% ( 2) 00:17:46.590 
24.462 - 24.652: 99.8829% ( 1) 00:17:46.590 24.841 - 25.031: 99.8907% ( 1) 00:17:46.590 25.600 - 25.790: 99.8985% ( 1) 00:17:46.590 25.979 - 26.169: 99.9063% ( 1) 00:17:46.590 28.065 - 28.255: 99.9141% ( 1) 00:17:46.590 3859.342 - 3883.615: 99.9219% ( 1) 00:17:46.590 3980.705 - 4004.978: 99.9532% ( 4) 00:17:46.590 4004.978 - 4029.250: 100.0000% ( 6) 00:17:46.590 00:17:46.590 Complete histogram 00:17:46.590 ================== 00:17:46.590 Range in us Cumulative Count 00:17:46.590 2.062 - 2.074: 0.0156% ( 2) 00:17:46.590 2.074 - 2.086: 17.2640% ( 2209) 00:17:46.590 2.086 - 2.098: 45.4829% ( 3614) 00:17:46.590 2.098 - 2.110: 48.2002% ( 348) 00:17:46.590 2.110 - 2.121: 56.2193% ( 1027) 00:17:46.590 2.121 - 2.133: 61.2243% ( 641) 00:17:46.590 2.133 - 2.145: 63.0671% ( 236) 00:17:46.590 2.145 - 2.157: 74.5686% ( 1473) 00:17:46.590 2.157 - 2.169: 81.8224% ( 929) 00:17:46.590 2.169 - 2.181: 83.0093% ( 152) 00:17:46.590 2.181 - 2.193: 86.0233% ( 386) 00:17:46.590 2.193 - 2.204: 87.5224% ( 192) 00:17:46.590 2.204 - 2.216: 88.1237% ( 77) 00:17:46.590 2.216 - 2.228: 90.3334% ( 283) 00:17:46.590 2.228 - 2.240: 92.7149% ( 305) 00:17:46.590 2.240 - 2.252: 94.1126% ( 179) 00:17:46.590 2.252 - 2.264: 94.6904% ( 74) 00:17:46.590 2.264 - 2.276: 94.9325% ( 31) 00:17:46.590 2.276 - 2.287: 95.1120% ( 23) 00:17:46.590 2.287 - 2.299: 95.2448% ( 17) 00:17:46.590 2.299 - 2.311: 95.5181% ( 35) 00:17:46.590 2.311 - 2.323: 95.8538% ( 43) 00:17:46.590 2.323 - 2.335: 95.9007% ( 6) 00:17:46.590 2.335 - 2.347: 95.9397% ( 5) 00:17:46.590 2.347 - 2.359: 95.9710% ( 4) 00:17:46.590 2.359 - 2.370: 96.0178% ( 6) 00:17:46.590 2.370 - 2.382: 96.1584% ( 18) 00:17:46.590 2.382 - 2.394: 96.4238% ( 34) 00:17:46.590 2.394 - 2.406: 96.6737% ( 32) 00:17:46.590 2.406 - 2.418: 96.8845% ( 27) 00:17:46.590 2.418 - 2.430: 97.1500% ( 34) 00:17:46.590 2.430 - 2.441: 97.3530% ( 26) 00:17:46.590 2.441 - 2.453: 97.5873% ( 30) 00:17:46.590 2.453 - 2.465: 97.7747% ( 24) 00:17:46.590 2.465 - 2.477: 97.9230% ( 19) 
00:17:46.590 2.477 - 2.489: 98.0323% ( 14) 00:17:46.590 2.489 - 2.501: 98.1182% ( 11) 00:17:46.590 2.501 - 2.513: 98.1807% ( 8) 00:17:46.590 2.513 - 2.524: 98.1963% ( 2) 00:17:46.590 2.524 - 2.536: 98.2510% ( 7) 00:17:46.590 2.536 - 2.548: 98.2900% ( 5) 00:17:46.590 2.548 - 2.560: 98.3290% ( 5) 00:17:46.590 2.560 - 2.572: 98.3447% ( 2) 00:17:46.590 2.572 - 2.584: 98.3681% ( 3) 00:17:46.590 2.584 - 2.596: 98.3759% ( 1) 00:17:46.590 2.596 - 2.607: 98.3993% ( 3) 00:17:46.590 2.631 - 2.643: 98.4071% ( 1) 00:17:46.590 2.655 - 2.667: 98.4149% ( 1) 00:17:46.590 2.679 - 2.690: 98.4227% ( 1) 00:17:46.590 2.761 - 2.773: 98.4305% ( 1) 00:17:46.590 2.856 - 2.868: 98.4384% ( 1) 00:17:46.590 3.034 - 3.058: 98.4462% ( 1) 00:17:46.590 3.081 - 3.105: 98.4540% ( 1) 00:17:46.590 3.508 - 3.532: 98.4774% ( 3) 00:17:46.590 3.532 - 3.556: 98.4852% ( 1) 00:17:46.590 3.556 - 3.579: 98.4930% ( 1) 00:17:46.590 3.650 - 3.674: 98.5086% ( 2) 00:17:46.590 3.674 - 3.698: 98.5164% ( 1) 00:17:46.590 3.698 - 3.721: 98.5321% ( 2) 00:17:46.590 3.745 - 3.769: 98.5399% ( 1) 00:17:46.590 3.793 - 3.816: 98.5789% ( 5) 00:17:46.590 3.816 - 3.840: 98.5867% ( 1) 00:17:46.590 3.864 - 3.887: 98.5945% ( 1) 00:17:46.590 3.911 - 3.935: 98.6023% ( 1) 00:17:46.590 4.030 - 4.053: 98.6101% ( 1) 00:17:46.590 4.053 - 4.077: 98.6179% ( 1) 00:17:46.590 4.077 - 4.101: 98.6258% ( 1) 00:17:46.590 5.618 - 5.641: 98.6414% ( 2) 00:17:46.590 5.641 - 5.665: 98.6492% ( 1) 00:17:46.590 5.831 - 5.855: 98.6570% ( 1) 00:17:46.590 5.855 - 5.879: 98.6648% ( 1) 00:17:46.590 5.950 - 5.973: 98.6726% ( 1) 00:17:46.590 5.997 - 6.021: 98.6804% ( 1) 00:17:46.590 6.163 - 6.210: 98.6960% ( 2) 00:17:46.590 6.210 - 6.258: 98.7038% ( 1) 00:17:46.590 6.353 - 6.400: 98.7116% ( 1) 00:17:46.590 6.590 - 6.637: 98.7273% ( 2) 00:17:46.590 6.827 - 6.874: 98.7351% ( 1) 00:17:46.590 6.874 - 6.921: 98.7429% ( 1) 00:17:46.590 6.921 - 6.969: 98.7507% ( 1) 00:17:46.590 7.159 - 7.206: 98.7585% ( 1) 00:17:46.590 7.206 - 7.253: 98.7663% ( 1) 00:17:46.590 7.396 - 
7.443: 98.7741% ( 1) 00:17:46.590 7.538 - 7.585: 9[2024-12-05 13:49:17.720146] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:46.590 8.7819% ( 1) 00:17:46.590 7.680 - 7.727: 98.7897% ( 1) 00:17:46.590 8.012 - 8.059: 98.7975% ( 1) 00:17:46.590 8.154 - 8.201: 98.8053% ( 1) 00:17:46.590 8.391 - 8.439: 98.8131% ( 1) 00:17:46.590 9.197 - 9.244: 98.8210% ( 1) 00:17:46.590 9.339 - 9.387: 98.8288% ( 1) 00:17:46.590 10.667 - 10.714: 98.8366% ( 1) 00:17:46.590 15.455 - 15.550: 98.8522% ( 2) 00:17:46.590 15.550 - 15.644: 98.8678% ( 2) 00:17:46.590 15.644 - 15.739: 98.8756% ( 1) 00:17:46.590 15.739 - 15.834: 98.9225% ( 6) 00:17:46.590 15.834 - 15.929: 98.9303% ( 1) 00:17:46.590 15.929 - 16.024: 98.9615% ( 4) 00:17:46.590 16.024 - 16.119: 99.0005% ( 5) 00:17:46.590 16.119 - 16.213: 99.0396% ( 5) 00:17:46.590 16.213 - 16.308: 99.0708% ( 4) 00:17:46.590 16.308 - 16.403: 99.0864% ( 2) 00:17:46.590 16.403 - 16.498: 99.0942% ( 1) 00:17:46.590 16.498 - 16.593: 99.1099% ( 2) 00:17:46.590 16.593 - 16.687: 99.1333% ( 3) 00:17:46.590 16.687 - 16.782: 99.1723% ( 5) 00:17:46.590 16.782 - 16.877: 99.2348% ( 8) 00:17:46.590 16.877 - 16.972: 99.2660% ( 4) 00:17:46.590 16.972 - 17.067: 99.2738% ( 1) 00:17:46.590 17.161 - 17.256: 99.2816% ( 1) 00:17:46.590 17.351 - 17.446: 99.2895% ( 1) 00:17:46.590 18.299 - 18.394: 99.3051% ( 2) 00:17:46.590 18.394 - 18.489: 99.3207% ( 2) 00:17:46.590 18.489 - 18.584: 99.3285% ( 1) 00:17:46.590 18.868 - 18.963: 99.3363% ( 1) 00:17:46.590 3980.705 - 4004.978: 99.7111% ( 48) 00:17:46.590 4004.978 - 4029.250: 100.0000% ( 37) 00:17:46.590 00:17:46.590 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:46.590 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:46.590 13:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:46.590 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:46.590 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:46.590 [ 00:17:46.590 { 00:17:46.590 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:46.590 "subtype": "Discovery", 00:17:46.590 "listen_addresses": [], 00:17:46.590 "allow_any_host": true, 00:17:46.590 "hosts": [] 00:17:46.590 }, 00:17:46.590 { 00:17:46.590 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:46.590 "subtype": "NVMe", 00:17:46.590 "listen_addresses": [ 00:17:46.590 { 00:17:46.590 "trtype": "VFIOUSER", 00:17:46.590 "adrfam": "IPv4", 00:17:46.590 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:46.590 "trsvcid": "0" 00:17:46.590 } 00:17:46.590 ], 00:17:46.590 "allow_any_host": true, 00:17:46.590 "hosts": [], 00:17:46.590 "serial_number": "SPDK1", 00:17:46.590 "model_number": "SPDK bdev Controller", 00:17:46.590 "max_namespaces": 32, 00:17:46.590 "min_cntlid": 1, 00:17:46.590 "max_cntlid": 65519, 00:17:46.590 "namespaces": [ 00:17:46.590 { 00:17:46.590 "nsid": 1, 00:17:46.590 "bdev_name": "Malloc1", 00:17:46.590 "name": "Malloc1", 00:17:46.590 "nguid": "2320CD43EE6D4090B717343266390023", 00:17:46.590 "uuid": "2320cd43-ee6d-4090-b717-343266390023" 00:17:46.590 }, 00:17:46.590 { 00:17:46.590 "nsid": 2, 00:17:46.590 "bdev_name": "Malloc3", 00:17:46.590 "name": "Malloc3", 00:17:46.590 "nguid": "F65D33F9274E41428572CE6E4031162C", 00:17:46.590 "uuid": "f65d33f9-274e-4142-8572-ce6e4031162c" 00:17:46.590 } 00:17:46.590 ] 00:17:46.590 }, 00:17:46.590 { 00:17:46.590 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:46.590 "subtype": "NVMe", 00:17:46.590 "listen_addresses": [ 00:17:46.590 { 00:17:46.590 "trtype": "VFIOUSER", 00:17:46.590 
"adrfam": "IPv4", 00:17:46.590 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:46.590 "trsvcid": "0" 00:17:46.590 } 00:17:46.590 ], 00:17:46.590 "allow_any_host": true, 00:17:46.590 "hosts": [], 00:17:46.590 "serial_number": "SPDK2", 00:17:46.590 "model_number": "SPDK bdev Controller", 00:17:46.590 "max_namespaces": 32, 00:17:46.590 "min_cntlid": 1, 00:17:46.590 "max_cntlid": 65519, 00:17:46.590 "namespaces": [ 00:17:46.590 { 00:17:46.590 "nsid": 1, 00:17:46.591 "bdev_name": "Malloc2", 00:17:46.591 "name": "Malloc2", 00:17:46.591 "nguid": "3BCAB72220AD4EDCB3D45508C8D04FCB", 00:17:46.591 "uuid": "3bcab722-20ad-4edc-b3d4-5508c8d04fcb" 00:17:46.591 } 00:17:46.591 ] 00:17:46.591 } 00:17:46.591 ] 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2222830 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:46.591 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:46.849 [2024-12-05 13:49:18.209543] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:46.849 Malloc4 00:17:46.849 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:47.106 [2024-12-05 13:49:18.595479] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.106 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:47.363 Asynchronous Event Request test 00:17:47.363 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.363 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.363 Registering asynchronous event callbacks... 00:17:47.363 Starting namespace attribute notice tests for all controllers... 00:17:47.363 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:47.363 aer_cb - Changed Namespace 00:17:47.363 Cleaning up... 
00:17:47.363 [ 00:17:47.363 { 00:17:47.363 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:47.363 "subtype": "Discovery", 00:17:47.363 "listen_addresses": [], 00:17:47.363 "allow_any_host": true, 00:17:47.363 "hosts": [] 00:17:47.363 }, 00:17:47.363 { 00:17:47.363 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:47.363 "subtype": "NVMe", 00:17:47.363 "listen_addresses": [ 00:17:47.363 { 00:17:47.363 "trtype": "VFIOUSER", 00:17:47.363 "adrfam": "IPv4", 00:17:47.363 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:47.363 "trsvcid": "0" 00:17:47.363 } 00:17:47.363 ], 00:17:47.363 "allow_any_host": true, 00:17:47.363 "hosts": [], 00:17:47.363 "serial_number": "SPDK1", 00:17:47.363 "model_number": "SPDK bdev Controller", 00:17:47.363 "max_namespaces": 32, 00:17:47.363 "min_cntlid": 1, 00:17:47.363 "max_cntlid": 65519, 00:17:47.363 "namespaces": [ 00:17:47.363 { 00:17:47.363 "nsid": 1, 00:17:47.363 "bdev_name": "Malloc1", 00:17:47.363 "name": "Malloc1", 00:17:47.363 "nguid": "2320CD43EE6D4090B717343266390023", 00:17:47.363 "uuid": "2320cd43-ee6d-4090-b717-343266390023" 00:17:47.363 }, 00:17:47.363 { 00:17:47.363 "nsid": 2, 00:17:47.363 "bdev_name": "Malloc3", 00:17:47.363 "name": "Malloc3", 00:17:47.363 "nguid": "F65D33F9274E41428572CE6E4031162C", 00:17:47.363 "uuid": "f65d33f9-274e-4142-8572-ce6e4031162c" 00:17:47.363 } 00:17:47.363 ] 00:17:47.363 }, 00:17:47.363 { 00:17:47.363 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:47.363 "subtype": "NVMe", 00:17:47.363 "listen_addresses": [ 00:17:47.363 { 00:17:47.363 "trtype": "VFIOUSER", 00:17:47.363 "adrfam": "IPv4", 00:17:47.363 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:47.363 "trsvcid": "0" 00:17:47.363 } 00:17:47.363 ], 00:17:47.363 "allow_any_host": true, 00:17:47.363 "hosts": [], 00:17:47.363 "serial_number": "SPDK2", 00:17:47.363 "model_number": "SPDK bdev Controller", 00:17:47.363 "max_namespaces": 32, 00:17:47.363 "min_cntlid": 1, 00:17:47.363 "max_cntlid": 65519, 00:17:47.363 "namespaces": [ 
00:17:47.363 { 00:17:47.363 "nsid": 1, 00:17:47.363 "bdev_name": "Malloc2", 00:17:47.363 "name": "Malloc2", 00:17:47.363 "nguid": "3BCAB72220AD4EDCB3D45508C8D04FCB", 00:17:47.363 "uuid": "3bcab722-20ad-4edc-b3d4-5508c8d04fcb" 00:17:47.363 }, 00:17:47.363 { 00:17:47.363 "nsid": 2, 00:17:47.363 "bdev_name": "Malloc4", 00:17:47.363 "name": "Malloc4", 00:17:47.363 "nguid": "D7132231703F4196AEF9A560DA84513B", 00:17:47.363 "uuid": "d7132231-703f-4196-aef9-a560da84513b" 00:17:47.363 } 00:17:47.363 ] 00:17:47.363 } 00:17:47.363 ] 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2222830 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2217216 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2217216 ']' 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2217216 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217216 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217216' 00:17:47.619 killing process with pid 2217216 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2217216 00:17:47.619 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2217216 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2222973 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2222973' 00:17:47.877 Process pid: 2222973 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2222973 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2222973 ']' 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.877 
13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.877 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:47.877 [2024-12-05 13:49:19.287293] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:47.877 [2024-12-05 13:49:19.288288] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:17:47.877 [2024-12-05 13:49:19.288353] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.877 [2024-12-05 13:49:19.353146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.136 [2024-12-05 13:49:19.410702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.136 [2024-12-05 13:49:19.410759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.136 [2024-12-05 13:49:19.410788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.136 [2024-12-05 13:49:19.410799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.136 [2024-12-05 13:49:19.410809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.136 [2024-12-05 13:49:19.412434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.136 [2024-12-05 13:49:19.412505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.136 [2024-12-05 13:49:19.412570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.136 [2024-12-05 13:49:19.412573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.136 [2024-12-05 13:49:19.510849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:48.136 [2024-12-05 13:49:19.511120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:48.136 [2024-12-05 13:49:19.511410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:48.136 [2024-12-05 13:49:19.512071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:48.136 [2024-12-05 13:49:19.512280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:17:48.136 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.136 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:48.136 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:49.070 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:49.637 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:49.637 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:49.637 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:49.637 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:49.637 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:49.637 Malloc1 00:17:49.895 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:50.154 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:50.412 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:50.670 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:50.670 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:50.670 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:50.928 Malloc2 00:17:50.928 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:51.185 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:51.442 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:51.700 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:51.700 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2222973 00:17:51.700 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2222973 ']' 00:17:51.700 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2222973 00:17:51.700 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:51.700 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.700 13:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2222973 00:17:51.959 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.959 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.959 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2222973' 00:17:51.959 killing process with pid 2222973 00:17:51.959 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2222973 00:17:51.959 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2222973 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:52.218 00:17:52.218 real 0m53.469s 00:17:52.218 user 3m26.472s 00:17:52.218 sys 0m4.009s 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:52.218 ************************************ 00:17:52.218 END TEST nvmf_vfio_user 00:17:52.218 ************************************ 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.218 ************************************ 00:17:52.218 START TEST nvmf_vfio_user_nvme_compliance 00:17:52.218 ************************************ 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:52.218 * Looking for test storage... 00:17:52.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.218 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.219 13:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.219 13:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.219 --rc genhtml_branch_coverage=1 00:17:52.219 --rc genhtml_function_coverage=1 00:17:52.219 --rc genhtml_legend=1 00:17:52.219 --rc geninfo_all_blocks=1 00:17:52.219 --rc geninfo_unexecuted_blocks=1 00:17:52.219 00:17:52.219 ' 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.219 --rc genhtml_branch_coverage=1 00:17:52.219 --rc genhtml_function_coverage=1 00:17:52.219 --rc genhtml_legend=1 00:17:52.219 --rc geninfo_all_blocks=1 00:17:52.219 --rc geninfo_unexecuted_blocks=1 00:17:52.219 00:17:52.219 ' 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.219 --rc genhtml_branch_coverage=1 00:17:52.219 --rc genhtml_function_coverage=1 00:17:52.219 --rc 
genhtml_legend=1 00:17:52.219 --rc geninfo_all_blocks=1 00:17:52.219 --rc geninfo_unexecuted_blocks=1 00:17:52.219 00:17:52.219 ' 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.219 --rc genhtml_branch_coverage=1 00:17:52.219 --rc genhtml_function_coverage=1 00:17:52.219 --rc genhtml_legend=1 00:17:52.219 --rc geninfo_all_blocks=1 00:17:52.219 --rc geninfo_unexecuted_blocks=1 00:17:52.219 00:17:52.219 ' 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.219 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.219 13:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.220 13:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2223579 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2223579' 00:17:52.220 Process pid: 2223579 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2223579 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2223579 ']' 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.220 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:52.220 [2024-12-05 13:49:23.740121] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:17:52.220 [2024-12-05 13:49:23.740211] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.478 [2024-12-05 13:49:23.804498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.478 [2024-12-05 13:49:23.858279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.478 [2024-12-05 13:49:23.858331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.478 [2024-12-05 13:49:23.858359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.478 [2024-12-05 13:49:23.858370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.478 [2024-12-05 13:49:23.858378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:52.478 [2024-12-05 13:49:23.859730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.478 [2024-12-05 13:49:23.859791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.478 [2024-12-05 13:49:23.859794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.478 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.478 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:52.478 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:53.852 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.852 13:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.852 malloc0 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:53.852 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:53.852 00:17:53.852 00:17:53.852 CUnit - A unit testing framework for C - Version 2.1-3 00:17:53.852 http://cunit.sourceforge.net/ 00:17:53.852 00:17:53.852 00:17:53.852 Suite: nvme_compliance 00:17:53.852 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 13:49:25.209953] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:53.852 [2024-12-05 13:49:25.211389] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:53.852 [2024-12-05 13:49:25.211436] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:53.852 [2024-12-05 13:49:25.211450] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:53.852 [2024-12-05 13:49:25.212970] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:53.852 passed 00:17:53.852 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 13:49:25.298541] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:53.852 [2024-12-05 13:49:25.301566] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:53.852 passed 00:17:54.109 Test: admin_identify_ns ...[2024-12-05 13:49:25.386160] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.109 [2024-12-05 13:49:25.449434] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:54.109 [2024-12-05 13:49:25.457432] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:54.109 [2024-12-05 13:49:25.478560] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:54.109 passed 00:17:54.109 Test: admin_get_features_mandatory_features ...[2024-12-05 13:49:25.559292] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.109 [2024-12-05 13:49:25.564319] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.109 passed 00:17:54.366 Test: admin_get_features_optional_features ...[2024-12-05 13:49:25.646898] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.366 [2024-12-05 13:49:25.650934] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.366 passed 00:17:54.366 Test: admin_set_features_number_of_queues ...[2024-12-05 13:49:25.732911] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.366 [2024-12-05 13:49:25.841550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.366 passed 00:17:54.623 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 13:49:25.924054] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.623 [2024-12-05 13:49:25.927077] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.623 passed 00:17:54.624 Test: admin_get_log_page_with_lpo ...[2024-12-05 13:49:26.009935] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.624 [2024-12-05 13:49:26.077433] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:54.624 [2024-12-05 13:49:26.090507] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.624 passed 00:17:54.881 Test: fabric_property_get ...[2024-12-05 13:49:26.174009] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.881 [2024-12-05 13:49:26.175276] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:54.881 [2024-12-05 13:49:26.177030] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.881 passed 00:17:54.881 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 13:49:26.259572] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.881 [2024-12-05 13:49:26.260883] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:54.881 [2024-12-05 13:49:26.262601] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.881 passed 00:17:54.881 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 13:49:26.348740] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.139 [2024-12-05 13:49:26.432427] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.139 [2024-12-05 13:49:26.448431] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.139 [2024-12-05 13:49:26.453518] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.139 passed 00:17:55.139 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 13:49:26.536046] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.139 [2024-12-05 13:49:26.537371] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:55.139 [2024-12-05 13:49:26.539069] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.139 passed 00:17:55.139 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 13:49:26.621179] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.396 [2024-12-05 13:49:26.697448] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:55.396 [2024-12-05 
13:49:26.721431] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.396 [2024-12-05 13:49:26.726537] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.396 passed 00:17:55.396 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 13:49:26.810058] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.396 [2024-12-05 13:49:26.811376] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:55.396 [2024-12-05 13:49:26.811435] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:55.396 [2024-12-05 13:49:26.813078] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.396 passed 00:17:55.396 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 13:49:26.894188] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.653 [2024-12-05 13:49:26.986431] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:55.653 [2024-12-05 13:49:26.994439] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:55.653 [2024-12-05 13:49:27.002444] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:55.653 [2024-12-05 13:49:27.010425] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:55.654 [2024-12-05 13:49:27.039542] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.654 passed 00:17:55.654 Test: admin_create_io_sq_verify_pc ...[2024-12-05 13:49:27.123064] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.654 [2024-12-05 13:49:27.139440] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:55.654 [2024-12-05 13:49:27.157498] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.911 passed 00:17:55.911 Test: admin_create_io_qp_max_qps ...[2024-12-05 13:49:27.243046] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.284 [2024-12-05 13:49:28.369434] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:57.284 [2024-12-05 13:49:28.767495] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.284 passed 00:17:57.542 Test: admin_create_io_sq_shared_cq ...[2024-12-05 13:49:28.851779] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.542 [2024-12-05 13:49:28.983426] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:57.542 [2024-12-05 13:49:29.020519] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.542 passed 00:17:57.542 00:17:57.542 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.542 suites 1 1 n/a 0 0 00:17:57.542 tests 18 18 18 0 0 00:17:57.542 asserts 360 360 360 0 n/a 00:17:57.542 00:17:57.542 Elapsed time = 1.579 seconds 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2223579 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2223579 ']' 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2223579 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2223579 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2223579' 00:17:57.800 killing process with pid 2223579 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2223579 00:17:57.800 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2223579 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:58.060 00:17:58.060 real 0m5.787s 00:17:58.060 user 0m16.269s 00:17:58.060 sys 0m0.555s 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.060 ************************************ 00:17:58.060 END TEST nvmf_vfio_user_nvme_compliance 00:17:58.060 ************************************ 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.060 ************************************ 00:17:58.060 START TEST nvmf_vfio_user_fuzz 00:17:58.060 ************************************ 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.060 * Looking for test storage... 00:17:58.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:58.060 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.061 13:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:58.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.061 --rc genhtml_branch_coverage=1 00:17:58.061 --rc genhtml_function_coverage=1 00:17:58.061 --rc genhtml_legend=1 00:17:58.061 --rc geninfo_all_blocks=1 00:17:58.061 --rc geninfo_unexecuted_blocks=1 00:17:58.061 00:17:58.061 ' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.061 --rc genhtml_branch_coverage=1 00:17:58.061 --rc genhtml_function_coverage=1 00:17:58.061 --rc genhtml_legend=1 00:17:58.061 --rc geninfo_all_blocks=1 00:17:58.061 --rc geninfo_unexecuted_blocks=1 00:17:58.061 00:17:58.061 ' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.061 --rc genhtml_branch_coverage=1 00:17:58.061 --rc genhtml_function_coverage=1 00:17:58.061 --rc genhtml_legend=1 00:17:58.061 --rc geninfo_all_blocks=1 00:17:58.061 --rc geninfo_unexecuted_blocks=1 00:17:58.061 00:17:58.061 ' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.061 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:58.061 --rc genhtml_branch_coverage=1 00:17:58.061 --rc genhtml_function_coverage=1 00:17:58.061 --rc genhtml_legend=1 00:17:58.061 --rc geninfo_all_blocks=1 00:17:58.061 --rc geninfo_unexecuted_blocks=1 00:17:58.061 00:17:58.061 ' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.061 13:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:58.061 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2224327 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2224327' 00:17:58.062 Process pid: 2224327 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2224327 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2224327 ']' 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.062 13:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.062 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:58.320 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.320 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:58.320 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.327 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.586 malloc0 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:59.586 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:31.646 Fuzzing completed. Shutting down the fuzz application 00:18:31.646 00:18:31.646 Dumping successful admin opcodes: 00:18:31.646 9, 10, 00:18:31.646 Dumping successful io opcodes: 00:18:31.646 0, 00:18:31.646 NS: 0x20000081ef00 I/O qp, Total commands completed: 713358, total successful commands: 2780, random_seed: 2694977984 00:18:31.646 NS: 0x20000081ef00 admin qp, Total commands completed: 126352, total successful commands: 29, random_seed: 4079455936 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2224327 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2224327 ']' 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2224327 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2224327 00:18:31.646 13:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2224327' 00:18:31.646 killing process with pid 2224327 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2224327 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2224327 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:31.646 00:18:31.646 real 0m32.220s 00:18:31.646 user 0m33.602s 00:18:31.646 sys 0m26.588s 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.646 ************************************ 00:18:31.646 END TEST nvmf_vfio_user_fuzz 00:18:31.646 ************************************ 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:18:31.646 13:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.646 ************************************ 00:18:31.646 START TEST nvmf_auth_target 00:18:31.646 ************************************ 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.647 * Looking for test storage... 00:18:31.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.647 13:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.647 13:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:31.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.647 --rc genhtml_branch_coverage=1 00:18:31.647 --rc genhtml_function_coverage=1 00:18:31.647 --rc genhtml_legend=1 00:18:31.647 --rc geninfo_all_blocks=1 00:18:31.647 --rc geninfo_unexecuted_blocks=1 00:18:31.647 00:18:31.647 ' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:31.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.647 --rc genhtml_branch_coverage=1 00:18:31.647 --rc genhtml_function_coverage=1 00:18:31.647 --rc genhtml_legend=1 00:18:31.647 --rc geninfo_all_blocks=1 00:18:31.647 --rc geninfo_unexecuted_blocks=1 00:18:31.647 00:18:31.647 ' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:31.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.647 --rc genhtml_branch_coverage=1 00:18:31.647 --rc genhtml_function_coverage=1 00:18:31.647 --rc genhtml_legend=1 00:18:31.647 --rc geninfo_all_blocks=1 00:18:31.647 --rc geninfo_unexecuted_blocks=1 00:18:31.647 00:18:31.647 ' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:31.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.647 --rc genhtml_branch_coverage=1 00:18:31.647 --rc genhtml_function_coverage=1 00:18:31.647 --rc genhtml_legend=1 00:18:31.647 
--rc geninfo_all_blocks=1 00:18:31.647 --rc geninfo_unexecuted_blocks=1 00:18:31.647 00:18:31.647 ' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.647 
13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.647 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:31.648 13:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:31.648 13:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:31.648 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.586 13:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.586 13:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:32.586 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:32.586 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.586 
13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:32.586 Found net devices under 0000:09:00.0: cvl_0_0 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.586 
13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:32.586 Found net devices under 0000:09:00.1: cvl_0_1 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.586 13:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.586 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.587 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.587 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.587 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.587 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.587 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:18:32.845 00:18:32.845 --- 10.0.0.2 ping statistics --- 00:18:32.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.845 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:18:32.845 00:18:32.845 --- 10.0.0.1 ping statistics --- 00:18:32.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.845 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2229804 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:32.845 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2229804 00:18:32.846 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2229804 ']' 00:18:32.846 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.846 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.846 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:32.846 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.846 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.104 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.104 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:33.104 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2229829 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aa6cdb3d3fd70278273d14c3c2394bf792290eef4fe62537 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Yxu 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aa6cdb3d3fd70278273d14c3c2394bf792290eef4fe62537 0 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aa6cdb3d3fd70278273d14c3c2394bf792290eef4fe62537 0 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aa6cdb3d3fd70278273d14c3c2394bf792290eef4fe62537 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Yxu 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Yxu 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Yxu 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a267d1d52e13d38b9eee30e59dcd886fee42a86aa1298d40b2641bfaa840a578 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vv2 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a267d1d52e13d38b9eee30e59dcd886fee42a86aa1298d40b2641bfaa840a578 3 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a267d1d52e13d38b9eee30e59dcd886fee42a86aa1298d40b2641bfaa840a578 3 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a267d1d52e13d38b9eee30e59dcd886fee42a86aa1298d40b2641bfaa840a578 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:33.105 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vv2 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vv2 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.vv2 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f713c9b071af53c0fd5fde1f81dfb4df 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.djv 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f713c9b071af53c0fd5fde1f81dfb4df 1 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
f713c9b071af53c0fd5fde1f81dfb4df 1 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f713c9b071af53c0fd5fde1f81dfb4df 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.djv 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.djv 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.djv 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.364 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a76cb6f4c2958aa5571c9d5c5204b65e6e80716ba948de97 00:18:33.365 13:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mOn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a76cb6f4c2958aa5571c9d5c5204b65e6e80716ba948de97 2 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a76cb6f4c2958aa5571c9d5c5204b65e6e80716ba948de97 2 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a76cb6f4c2958aa5571c9d5c5204b65e6e80716ba948de97 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mOn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mOn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.mOn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3d9bec63a4e97a8103e69e1683008ed0d3b70a29e816f5bb 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.JEn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3d9bec63a4e97a8103e69e1683008ed0d3b70a29e816f5bb 2 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3d9bec63a4e97a8103e69e1683008ed0d3b70a29e816f5bb 2 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3d9bec63a4e97a8103e69e1683008ed0d3b70a29e816f5bb 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.JEn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.JEn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.JEn 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2ea756982e5c6132954bd820d038d7a3 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IHl 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2ea756982e5c6132954bd820d038d7a3 1 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2ea756982e5c6132954bd820d038d7a3 1 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2ea756982e5c6132954bd820d038d7a3 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IHl 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IHl 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.IHl 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8470673d58f0b3eefdb197a74cb93dd971de77eba97cf8b8626ddf87409e1549 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.FXa 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8470673d58f0b3eefdb197a74cb93dd971de77eba97cf8b8626ddf87409e1549 3 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 8470673d58f0b3eefdb197a74cb93dd971de77eba97cf8b8626ddf87409e1549 3 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8470673d58f0b3eefdb197a74cb93dd971de77eba97cf8b8626ddf87409e1549 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.FXa 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.FXa 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.FXa 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2229804 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2229804 ']' 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
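The repeated `gen_dhchap_key` blocks above all follow one recipe: draw half the requested length in random bytes, render them as hex with `xxd`, then wrap that hex string as an NVMe DH-HMAC-CHAP secret. Based on the traced `format_key` steps (prefix `DHHC-1`, a digest id, and a `python -` invocation), the wrapping appears to append a little-endian CRC32 of the hex string and base64-encode the result; that detail is a reconstruction, not quoted from the log. A hedged standalone sketch for the null-digest case (digest id 0, as in the first key):

```shell
# Reconstruction of gen_dhchap_key null 48 from the trace above.
# 24 random bytes -> 48 hex chars, exactly as the log's xxd call produces.
# (od fallback is an addition for systems without xxd.)
key=$(xxd -p -c0 -l 24 /dev/urandom 2>/dev/null \
      || od -An -tx1 -N24 /dev/urandom | tr -d ' \n')
digest=0   # digests table in the log: null=0 sha256=1 sha384=2 sha512=3

# Assumed DHHC-1 framing: base64(hex-string bytes + CRC32-LE), colon-wrapped.
formatted=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
)
echo "$formatted"
```

The resulting string is what lands in the `/tmp/spdk.key-*` files that the test later registers with `keyring_file_add_key` on both the target and host RPC sockets.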
00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.365 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2229829 /var/tmp/host.sock 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2229829 ']' 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:33.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yxu 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.931 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.188 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.188 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Yxu 00:18:34.188 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Yxu 00:18:34.445 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.vv2 ]] 00:18:34.445 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vv2 00:18:34.445 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.445 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.445 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.445 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vv2 00:18:34.445 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vv2 00:18:34.702 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:34.702 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.djv 00:18:34.702 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.702 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.702 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.702 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.djv 00:18:34.702 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.djv 00:18:34.959 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.mOn ]] 00:18:34.959 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mOn 00:18:34.959 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.959 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.959 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.959 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mOn 00:18:34.959 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mOn 00:18:35.214 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:35.214 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JEn 00:18:35.214 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.214 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.214 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.214 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.JEn 00:18:35.214 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.JEn 00:18:35.471 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.IHl ]] 00:18:35.471 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHl 00:18:35.471 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.471 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.471 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.471 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHl 00:18:35.471 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHl 00:18:35.727 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:35.727 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FXa 00:18:35.727 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.727 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.727 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.727 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FXa 00:18:35.727 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.FXa 00:18:35.984 13:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:35.984 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:35.984 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.984 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.984 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.984 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.241 13:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.241 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.497 00:18:36.497 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.497 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.497 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.754 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.754 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.754 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.114 { 00:18:37.114 "cntlid": 1, 00:18:37.114 "qid": 0, 00:18:37.114 "state": "enabled", 00:18:37.114 "thread": "nvmf_tgt_poll_group_000", 00:18:37.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:37.114 "listen_address": { 00:18:37.114 "trtype": "TCP", 00:18:37.114 "adrfam": "IPv4", 00:18:37.114 "traddr": "10.0.0.2", 00:18:37.114 "trsvcid": "4420" 00:18:37.114 }, 00:18:37.114 "peer_address": { 00:18:37.114 "trtype": "TCP", 00:18:37.114 "adrfam": "IPv4", 00:18:37.114 "traddr": "10.0.0.1", 00:18:37.114 "trsvcid": "46494" 00:18:37.114 }, 00:18:37.114 "auth": { 00:18:37.114 "state": "completed", 00:18:37.114 "digest": "sha256", 00:18:37.114 "dhgroup": "null" 00:18:37.114 } 00:18:37.114 } 00:18:37.114 ]' 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.114 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.392 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:18:37.392 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:38.323 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.581 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.838 00:18:38.838 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.838 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.838 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.094 { 00:18:39.094 "cntlid": 3, 00:18:39.094 "qid": 0, 00:18:39.094 "state": "enabled", 00:18:39.094 "thread": "nvmf_tgt_poll_group_000", 00:18:39.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:39.094 "listen_address": { 00:18:39.094 "trtype": "TCP", 00:18:39.094 "adrfam": "IPv4", 00:18:39.094 
"traddr": "10.0.0.2", 00:18:39.094 "trsvcid": "4420" 00:18:39.094 }, 00:18:39.094 "peer_address": { 00:18:39.094 "trtype": "TCP", 00:18:39.094 "adrfam": "IPv4", 00:18:39.094 "traddr": "10.0.0.1", 00:18:39.094 "trsvcid": "46512" 00:18:39.094 }, 00:18:39.094 "auth": { 00:18:39.094 "state": "completed", 00:18:39.094 "digest": "sha256", 00:18:39.094 "dhgroup": "null" 00:18:39.094 } 00:18:39.094 } 00:18:39.094 ]' 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.094 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.655 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:18:39.655 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.586 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.586 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.153 00:18:41.153 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.153 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.153 
13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.411 { 00:18:41.411 "cntlid": 5, 00:18:41.411 "qid": 0, 00:18:41.411 "state": "enabled", 00:18:41.411 "thread": "nvmf_tgt_poll_group_000", 00:18:41.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:41.411 "listen_address": { 00:18:41.411 "trtype": "TCP", 00:18:41.411 "adrfam": "IPv4", 00:18:41.411 "traddr": "10.0.0.2", 00:18:41.411 "trsvcid": "4420" 00:18:41.411 }, 00:18:41.411 "peer_address": { 00:18:41.411 "trtype": "TCP", 00:18:41.411 "adrfam": "IPv4", 00:18:41.411 "traddr": "10.0.0.1", 00:18:41.411 "trsvcid": "46524" 00:18:41.411 }, 00:18:41.411 "auth": { 00:18:41.411 "state": "completed", 00:18:41.411 "digest": "sha256", 00:18:41.411 "dhgroup": "null" 00:18:41.411 } 00:18:41.411 } 00:18:41.411 ]' 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.411 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.668 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:18:41.668 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:18:42.600 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.600 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.600 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.600 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.600 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.600 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.600 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.600 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.858 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.116 00:18:43.373 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.373 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.373 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.631 
13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.631 { 00:18:43.631 "cntlid": 7, 00:18:43.631 "qid": 0, 00:18:43.631 "state": "enabled", 00:18:43.631 "thread": "nvmf_tgt_poll_group_000", 00:18:43.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:43.631 "listen_address": { 00:18:43.631 "trtype": "TCP", 00:18:43.631 "adrfam": "IPv4", 00:18:43.631 "traddr": "10.0.0.2", 00:18:43.631 "trsvcid": "4420" 00:18:43.631 }, 00:18:43.631 "peer_address": { 00:18:43.631 "trtype": "TCP", 00:18:43.631 "adrfam": "IPv4", 00:18:43.631 "traddr": "10.0.0.1", 00:18:43.631 "trsvcid": "46544" 00:18:43.631 }, 00:18:43.631 "auth": { 00:18:43.631 "state": "completed", 00:18:43.631 "digest": "sha256", 00:18:43.631 "dhgroup": "null" 00:18:43.631 } 00:18:43.631 } 00:18:43.631 ]' 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.631 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.631 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.631 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.632 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.632 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.632 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.890 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:18:43.890 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.824 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.082 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.340 00:18:45.597 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.597 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.597 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.855 { 00:18:45.855 "cntlid": 9, 00:18:45.855 "qid": 0, 00:18:45.855 "state": "enabled", 00:18:45.855 "thread": "nvmf_tgt_poll_group_000", 00:18:45.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:45.855 "listen_address": { 00:18:45.855 "trtype": "TCP", 00:18:45.855 "adrfam": "IPv4", 00:18:45.855 "traddr": "10.0.0.2", 00:18:45.855 "trsvcid": "4420" 00:18:45.855 }, 00:18:45.855 "peer_address": { 00:18:45.855 "trtype": "TCP", 00:18:45.855 "adrfam": "IPv4", 00:18:45.855 "traddr": "10.0.0.1", 00:18:45.855 "trsvcid": "39954" 00:18:45.855 
}, 00:18:45.855 "auth": { 00:18:45.855 "state": "completed", 00:18:45.855 "digest": "sha256", 00:18:45.855 "dhgroup": "ffdhe2048" 00:18:45.855 } 00:18:45.855 } 00:18:45.855 ]' 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.855 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.112 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:18:46.112 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret 
DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.046 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.304 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.561 00:18:47.818 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.818 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.818 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.076 { 00:18:48.076 "cntlid": 11, 00:18:48.076 "qid": 0, 00:18:48.076 "state": "enabled", 00:18:48.076 "thread": "nvmf_tgt_poll_group_000", 00:18:48.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:48.076 "listen_address": { 00:18:48.076 "trtype": "TCP", 00:18:48.076 "adrfam": "IPv4", 00:18:48.076 "traddr": "10.0.0.2", 00:18:48.076 "trsvcid": "4420" 00:18:48.076 }, 00:18:48.076 "peer_address": { 00:18:48.076 "trtype": "TCP", 00:18:48.076 "adrfam": "IPv4", 00:18:48.076 "traddr": "10.0.0.1", 00:18:48.076 "trsvcid": "39988" 00:18:48.076 }, 00:18:48.076 "auth": { 00:18:48.076 "state": "completed", 00:18:48.076 "digest": "sha256", 00:18:48.076 "dhgroup": "ffdhe2048" 00:18:48.076 } 00:18:48.076 } 00:18:48.076 ]' 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.076 13:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.076 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.334 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:18:48.334 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.269 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.526 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.783 00:18:49.783 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.783 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.783 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.348 13:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.348 { 00:18:50.348 "cntlid": 13, 00:18:50.348 "qid": 0, 00:18:50.348 "state": "enabled", 00:18:50.348 "thread": "nvmf_tgt_poll_group_000", 00:18:50.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:50.348 "listen_address": { 00:18:50.348 "trtype": "TCP", 00:18:50.348 "adrfam": "IPv4", 00:18:50.348 "traddr": "10.0.0.2", 00:18:50.348 "trsvcid": "4420" 00:18:50.348 }, 00:18:50.348 "peer_address": { 00:18:50.348 "trtype": "TCP", 00:18:50.348 "adrfam": "IPv4", 00:18:50.348 "traddr": "10.0.0.1", 00:18:50.348 "trsvcid": "40014" 00:18:50.348 }, 00:18:50.348 "auth": { 00:18:50.348 "state": "completed", 00:18:50.348 "digest": "sha256", 00:18:50.348 "dhgroup": "ffdhe2048" 00:18:50.348 } 00:18:50.348 } 00:18:50.348 ]' 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.348 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.605 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:18:50.605 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.538 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.796 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.362 00:18:52.362 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.362 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.362 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.620 { 00:18:52.620 "cntlid": 15, 00:18:52.620 "qid": 0, 00:18:52.620 "state": "enabled", 00:18:52.620 "thread": "nvmf_tgt_poll_group_000", 00:18:52.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:52.620 "listen_address": { 00:18:52.620 "trtype": "TCP", 00:18:52.620 "adrfam": "IPv4", 00:18:52.620 "traddr": "10.0.0.2", 00:18:52.620 "trsvcid": "4420" 00:18:52.620 }, 00:18:52.620 "peer_address": { 00:18:52.620 "trtype": "TCP", 00:18:52.620 "adrfam": "IPv4", 00:18:52.620 "traddr": "10.0.0.1", 
00:18:52.620 "trsvcid": "40048" 00:18:52.620 }, 00:18:52.620 "auth": { 00:18:52.620 "state": "completed", 00:18:52.620 "digest": "sha256", 00:18:52.620 "dhgroup": "ffdhe2048" 00:18:52.620 } 00:18:52.620 } 00:18:52.620 ]' 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.620 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.620 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.620 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.620 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.879 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:18:52.879 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.837 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:54.095 13:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.095 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.353 00:18:54.611 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.611 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.611 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.870 { 00:18:54.870 "cntlid": 17, 00:18:54.870 "qid": 0, 00:18:54.870 "state": "enabled", 00:18:54.870 "thread": "nvmf_tgt_poll_group_000", 00:18:54.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:54.870 "listen_address": { 00:18:54.870 "trtype": "TCP", 00:18:54.870 "adrfam": "IPv4", 00:18:54.870 "traddr": "10.0.0.2", 00:18:54.870 "trsvcid": "4420" 00:18:54.870 }, 00:18:54.870 "peer_address": { 00:18:54.870 "trtype": "TCP", 00:18:54.870 "adrfam": "IPv4", 00:18:54.870 "traddr": "10.0.0.1", 00:18:54.870 "trsvcid": "40072" 00:18:54.870 }, 00:18:54.870 "auth": { 00:18:54.870 "state": "completed", 00:18:54.870 "digest": "sha256", 00:18:54.870 "dhgroup": "ffdhe3072" 00:18:54.870 } 00:18:54.870 } 00:18:54.870 ]' 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.870 13:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.870 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.129 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:18:55.129 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:18:56.064 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.064 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.064 13:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.064 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.064 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.064 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.064 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.322 13:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.322 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.888 00:18:56.888 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.888 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.888 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.146 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.146 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.146 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.146 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.146 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.146 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.146 { 00:18:57.146 "cntlid": 19, 00:18:57.146 "qid": 0, 00:18:57.146 "state": "enabled", 00:18:57.146 "thread": "nvmf_tgt_poll_group_000", 00:18:57.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:57.146 "listen_address": { 00:18:57.146 "trtype": "TCP", 00:18:57.146 "adrfam": "IPv4", 00:18:57.146 "traddr": "10.0.0.2", 00:18:57.146 "trsvcid": "4420" 00:18:57.146 }, 00:18:57.146 "peer_address": { 00:18:57.146 "trtype": "TCP", 00:18:57.146 "adrfam": "IPv4", 00:18:57.146 "traddr": "10.0.0.1", 00:18:57.146 "trsvcid": "43224" 00:18:57.146 }, 00:18:57.146 "auth": { 00:18:57.146 "state": "completed", 00:18:57.146 "digest": "sha256", 00:18:57.146 "dhgroup": "ffdhe3072" 00:18:57.146 } 00:18:57.147 } 00:18:57.147 ]' 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.147 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.405 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:18:57.405 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:18:58.341 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.341 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:58.341 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.341 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.341 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.341 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.341 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.341 13:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.600 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.166 00:18:59.166 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.166 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.166 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.166 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.166 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.166 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.166 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.424 { 00:18:59.424 "cntlid": 21, 00:18:59.424 "qid": 0, 00:18:59.424 "state": "enabled", 00:18:59.424 "thread": "nvmf_tgt_poll_group_000", 00:18:59.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:59.424 "listen_address": { 00:18:59.424 "trtype": "TCP", 00:18:59.424 "adrfam": "IPv4", 00:18:59.424 "traddr": "10.0.0.2", 00:18:59.424 
"trsvcid": "4420" 00:18:59.424 }, 00:18:59.424 "peer_address": { 00:18:59.424 "trtype": "TCP", 00:18:59.424 "adrfam": "IPv4", 00:18:59.424 "traddr": "10.0.0.1", 00:18:59.424 "trsvcid": "43254" 00:18:59.424 }, 00:18:59.424 "auth": { 00:18:59.424 "state": "completed", 00:18:59.424 "digest": "sha256", 00:18:59.424 "dhgroup": "ffdhe3072" 00:18:59.424 } 00:18:59.424 } 00:18:59.424 ]' 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.424 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.682 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:18:59.682 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.618 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.877 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.442 00:19:01.443 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.443 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.443 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.701 { 00:19:01.701 "cntlid": 23, 00:19:01.701 "qid": 0, 00:19:01.701 "state": "enabled", 00:19:01.701 "thread": "nvmf_tgt_poll_group_000", 00:19:01.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:01.701 "listen_address": { 00:19:01.701 "trtype": "TCP", 00:19:01.701 "adrfam": "IPv4", 00:19:01.701 "traddr": "10.0.0.2", 00:19:01.701 "trsvcid": "4420" 00:19:01.701 }, 00:19:01.701 "peer_address": { 00:19:01.701 "trtype": "TCP", 00:19:01.701 "adrfam": "IPv4", 00:19:01.701 "traddr": "10.0.0.1", 00:19:01.701 "trsvcid": "43298" 00:19:01.701 }, 00:19:01.701 "auth": { 00:19:01.701 "state": "completed", 00:19:01.701 "digest": "sha256", 00:19:01.701 "dhgroup": "ffdhe3072" 00:19:01.701 } 00:19:01.701 } 00:19:01.701 ]' 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.701 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.702 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.702 13:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.702 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.702 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.702 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.702 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.960 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:01.960 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.894 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.460 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.719 00:19:03.719 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.719 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.719 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.977 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.977 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.977 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.977 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 13:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.977 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.977 { 00:19:03.977 "cntlid": 25, 00:19:03.977 "qid": 0, 00:19:03.977 "state": "enabled", 00:19:03.977 "thread": "nvmf_tgt_poll_group_000", 00:19:03.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:03.977 "listen_address": { 00:19:03.977 "trtype": "TCP", 00:19:03.977 "adrfam": "IPv4", 00:19:03.977 "traddr": "10.0.0.2", 00:19:03.977 "trsvcid": "4420" 00:19:03.977 }, 00:19:03.977 "peer_address": { 00:19:03.977 "trtype": "TCP", 00:19:03.977 "adrfam": "IPv4", 00:19:03.977 "traddr": "10.0.0.1", 00:19:03.977 "trsvcid": "43328" 00:19:03.977 }, 00:19:03.977 "auth": { 00:19:03.977 "state": "completed", 00:19:03.977 "digest": "sha256", 00:19:03.977 "dhgroup": "ffdhe4096" 00:19:03.977 } 00:19:03.977 } 00:19:03.977 ]' 00:19:03.977 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.234 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.234 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.234 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.234 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.234 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.234 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.234 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.492 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:04.492 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:05.426 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.426 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.426 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.426 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.426 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.426 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.426 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.426 13:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.684 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.943 00:19:06.202 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.202 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.202 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.460 { 00:19:06.460 "cntlid": 27, 00:19:06.460 "qid": 0, 00:19:06.460 "state": "enabled", 00:19:06.460 "thread": "nvmf_tgt_poll_group_000", 00:19:06.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:06.460 "listen_address": { 00:19:06.460 "trtype": "TCP", 00:19:06.460 "adrfam": "IPv4", 00:19:06.460 "traddr": "10.0.0.2", 00:19:06.460 
"trsvcid": "4420" 00:19:06.460 }, 00:19:06.460 "peer_address": { 00:19:06.460 "trtype": "TCP", 00:19:06.460 "adrfam": "IPv4", 00:19:06.460 "traddr": "10.0.0.1", 00:19:06.460 "trsvcid": "47818" 00:19:06.460 }, 00:19:06.460 "auth": { 00:19:06.460 "state": "completed", 00:19:06.460 "digest": "sha256", 00:19:06.460 "dhgroup": "ffdhe4096" 00:19:06.460 } 00:19:06.460 } 00:19:06.460 ]' 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.460 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.718 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:06.718 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.649 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.907 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.472 00:19:08.472 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.472 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:08.472 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.729 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.729 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.729 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.729 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.729 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.729 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.729 { 00:19:08.729 "cntlid": 29, 00:19:08.729 "qid": 0, 00:19:08.729 "state": "enabled", 00:19:08.729 "thread": "nvmf_tgt_poll_group_000", 00:19:08.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:08.729 "listen_address": { 00:19:08.729 "trtype": "TCP", 00:19:08.729 "adrfam": "IPv4", 00:19:08.729 "traddr": "10.0.0.2", 00:19:08.729 "trsvcid": "4420" 00:19:08.729 }, 00:19:08.729 "peer_address": { 00:19:08.729 "trtype": "TCP", 00:19:08.729 "adrfam": "IPv4", 00:19:08.729 "traddr": "10.0.0.1", 00:19:08.729 "trsvcid": "47846" 00:19:08.729 }, 00:19:08.729 "auth": { 00:19:08.729 "state": "completed", 00:19:08.730 "digest": "sha256", 00:19:08.730 "dhgroup": "ffdhe4096" 00:19:08.730 } 00:19:08.730 } 00:19:08.730 ]' 00:19:08.730 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.730 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.730 13:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.730 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.730 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.730 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.730 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.730 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.987 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:08.987 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.920 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.178 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.742 00:19:10.742 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.742 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.742 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.000 { 00:19:11.000 "cntlid": 31, 00:19:11.000 "qid": 0, 00:19:11.000 "state": "enabled", 00:19:11.000 "thread": "nvmf_tgt_poll_group_000", 00:19:11.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:11.000 "listen_address": { 00:19:11.000 "trtype": "TCP", 00:19:11.000 "adrfam": "IPv4", 00:19:11.000 "traddr": "10.0.0.2", 00:19:11.000 "trsvcid": "4420" 00:19:11.000 }, 00:19:11.000 "peer_address": { 00:19:11.000 "trtype": "TCP", 00:19:11.000 "adrfam": "IPv4", 00:19:11.000 "traddr": "10.0.0.1", 00:19:11.000 "trsvcid": "47864" 00:19:11.000 }, 00:19:11.000 "auth": { 00:19:11.000 "state": "completed", 00:19:11.000 "digest": "sha256", 00:19:11.000 "dhgroup": "ffdhe4096" 00:19:11.000 } 00:19:11.000 } 00:19:11.000 ]' 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.000 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.257 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:11.257 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.191 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.191 13:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.793 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.050 00:19:13.308 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.308 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.308 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.566 { 00:19:13.566 "cntlid": 33, 00:19:13.566 "qid": 0, 00:19:13.566 "state": "enabled", 00:19:13.566 "thread": "nvmf_tgt_poll_group_000", 00:19:13.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:13.566 "listen_address": { 00:19:13.566 "trtype": "TCP", 00:19:13.566 "adrfam": "IPv4", 00:19:13.566 "traddr": "10.0.0.2", 00:19:13.566 
"trsvcid": "4420" 00:19:13.566 }, 00:19:13.566 "peer_address": { 00:19:13.566 "trtype": "TCP", 00:19:13.566 "adrfam": "IPv4", 00:19:13.566 "traddr": "10.0.0.1", 00:19:13.566 "trsvcid": "47890" 00:19:13.566 }, 00:19:13.566 "auth": { 00:19:13.566 "state": "completed", 00:19:13.566 "digest": "sha256", 00:19:13.566 "dhgroup": "ffdhe6144" 00:19:13.566 } 00:19:13.566 } 00:19:13.566 ]' 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.566 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.823 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:13.823 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.779 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.059 13:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.059 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.625 00:19:15.625 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.625 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.625 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.882 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.882 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.882 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.882 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.882 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.882 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.882 { 00:19:15.882 "cntlid": 35, 00:19:15.882 "qid": 0, 00:19:15.883 "state": "enabled", 00:19:15.883 "thread": "nvmf_tgt_poll_group_000", 00:19:15.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:15.883 "listen_address": { 00:19:15.883 "trtype": "TCP", 00:19:15.883 "adrfam": "IPv4", 00:19:15.883 "traddr": "10.0.0.2", 00:19:15.883 "trsvcid": "4420" 00:19:15.883 }, 00:19:15.883 "peer_address": { 00:19:15.883 "trtype": "TCP", 00:19:15.883 "adrfam": "IPv4", 00:19:15.883 "traddr": "10.0.0.1", 00:19:15.883 "trsvcid": "57524" 00:19:15.883 }, 00:19:15.883 "auth": { 00:19:15.883 "state": "completed", 00:19:15.883 "digest": "sha256", 00:19:15.883 "dhgroup": "ffdhe6144" 00:19:15.883 } 00:19:15.883 } 00:19:15.883 ]' 00:19:15.883 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.883 13:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.883 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.883 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.883 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.140 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.140 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.141 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.399 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:16.399 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.333 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.591 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.156 00:19:18.156 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.156 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.156 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.414 13:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.414 { 00:19:18.414 "cntlid": 37, 00:19:18.414 "qid": 0, 00:19:18.414 "state": "enabled", 00:19:18.414 "thread": "nvmf_tgt_poll_group_000", 00:19:18.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:18.414 "listen_address": { 00:19:18.414 "trtype": "TCP", 00:19:18.414 "adrfam": "IPv4", 00:19:18.414 "traddr": "10.0.0.2", 00:19:18.414 "trsvcid": "4420" 00:19:18.414 }, 00:19:18.414 "peer_address": { 00:19:18.414 "trtype": "TCP", 00:19:18.414 "adrfam": "IPv4", 00:19:18.414 "traddr": "10.0.0.1", 00:19:18.414 "trsvcid": "57546" 00:19:18.414 }, 00:19:18.414 "auth": { 00:19:18.414 "state": "completed", 00:19:18.414 "digest": "sha256", 00:19:18.414 "dhgroup": "ffdhe6144" 00:19:18.414 } 00:19:18.414 } 00:19:18.414 ]' 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.414 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.672 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:18.672 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.605 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.863 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.428 00:19:20.428 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.428 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.428 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.686 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.686 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.686 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.686 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.686 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.943 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.943 { 00:19:20.943 "cntlid": 39, 00:19:20.943 "qid": 0, 00:19:20.943 "state": "enabled", 00:19:20.943 "thread": "nvmf_tgt_poll_group_000", 00:19:20.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:20.944 "listen_address": { 00:19:20.944 "trtype": "TCP", 00:19:20.944 "adrfam": 
"IPv4", 00:19:20.944 "traddr": "10.0.0.2", 00:19:20.944 "trsvcid": "4420" 00:19:20.944 }, 00:19:20.944 "peer_address": { 00:19:20.944 "trtype": "TCP", 00:19:20.944 "adrfam": "IPv4", 00:19:20.944 "traddr": "10.0.0.1", 00:19:20.944 "trsvcid": "57568" 00:19:20.944 }, 00:19:20.944 "auth": { 00:19:20.944 "state": "completed", 00:19:20.944 "digest": "sha256", 00:19:20.944 "dhgroup": "ffdhe6144" 00:19:20.944 } 00:19:20.944 } 00:19:20.944 ]' 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.944 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.201 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:21.201 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.135 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.392 
13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.392 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.322 00:19:23.322 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.322 13:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.322 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.579 { 00:19:23.579 "cntlid": 41, 00:19:23.579 "qid": 0, 00:19:23.579 "state": "enabled", 00:19:23.579 "thread": "nvmf_tgt_poll_group_000", 00:19:23.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:23.579 "listen_address": { 00:19:23.579 "trtype": "TCP", 00:19:23.579 "adrfam": "IPv4", 00:19:23.579 "traddr": "10.0.0.2", 00:19:23.579 "trsvcid": "4420" 00:19:23.579 }, 00:19:23.579 "peer_address": { 00:19:23.579 "trtype": "TCP", 00:19:23.579 "adrfam": "IPv4", 00:19:23.579 "traddr": "10.0.0.1", 00:19:23.579 "trsvcid": "57582" 00:19:23.579 }, 00:19:23.579 "auth": { 00:19:23.579 "state": "completed", 00:19:23.579 "digest": "sha256", 00:19:23.579 "dhgroup": "ffdhe8192" 00:19:23.579 } 00:19:23.579 } 00:19:23.579 ]' 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.579 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.579 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.579 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.579 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.836 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:23.836 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.767 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.024 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:25.024 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.024 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.024 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.024 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.025 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.955 00:19:25.955 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.955 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.956 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.213 13:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.213 { 00:19:26.213 "cntlid": 43, 00:19:26.213 "qid": 0, 00:19:26.213 "state": "enabled", 00:19:26.213 "thread": "nvmf_tgt_poll_group_000", 00:19:26.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:26.213 "listen_address": { 00:19:26.213 "trtype": "TCP", 00:19:26.213 "adrfam": "IPv4", 00:19:26.213 "traddr": "10.0.0.2", 00:19:26.213 "trsvcid": "4420" 00:19:26.213 }, 00:19:26.213 "peer_address": { 00:19:26.213 "trtype": "TCP", 00:19:26.213 "adrfam": "IPv4", 00:19:26.213 "traddr": "10.0.0.1", 00:19:26.213 "trsvcid": "56000" 00:19:26.213 }, 00:19:26.213 "auth": { 00:19:26.213 "state": "completed", 00:19:26.213 "digest": "sha256", 00:19:26.213 "dhgroup": "ffdhe8192" 00:19:26.213 } 00:19:26.213 } 00:19:26.213 ]' 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.213 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.471 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.471 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.471 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.728 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:26.728 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.661 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.919 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.852 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.852 { 00:19:28.852 "cntlid": 45, 00:19:28.852 "qid": 0, 00:19:28.852 "state": "enabled", 00:19:28.852 "thread": "nvmf_tgt_poll_group_000", 00:19:28.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:28.852 
"listen_address": { 00:19:28.852 "trtype": "TCP", 00:19:28.852 "adrfam": "IPv4", 00:19:28.852 "traddr": "10.0.0.2", 00:19:28.852 "trsvcid": "4420" 00:19:28.852 }, 00:19:28.852 "peer_address": { 00:19:28.852 "trtype": "TCP", 00:19:28.852 "adrfam": "IPv4", 00:19:28.852 "traddr": "10.0.0.1", 00:19:28.852 "trsvcid": "56036" 00:19:28.852 }, 00:19:28.852 "auth": { 00:19:28.852 "state": "completed", 00:19:28.852 "digest": "sha256", 00:19:28.852 "dhgroup": "ffdhe8192" 00:19:28.852 } 00:19:28.852 } 00:19:28.852 ]' 00:19:28.852 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.110 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.110 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.110 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.110 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.110 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.110 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.110 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.368 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:29.368 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.300 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.558 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.559 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.559 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.559 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.493 00:19:31.493 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.493 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.493 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.751 { 00:19:31.751 "cntlid": 47, 00:19:31.751 "qid": 0, 00:19:31.751 "state": "enabled", 00:19:31.751 "thread": "nvmf_tgt_poll_group_000", 00:19:31.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:31.751 "listen_address": { 00:19:31.751 "trtype": "TCP", 00:19:31.751 "adrfam": "IPv4", 00:19:31.751 "traddr": "10.0.0.2", 00:19:31.751 "trsvcid": "4420" 00:19:31.751 }, 00:19:31.751 "peer_address": { 00:19:31.751 "trtype": "TCP", 00:19:31.751 "adrfam": "IPv4", 00:19:31.751 "traddr": "10.0.0.1", 00:19:31.751 "trsvcid": "56060" 00:19:31.751 }, 00:19:31.751 "auth": { 00:19:31.751 "state": "completed", 00:19:31.751 "digest": "sha256", 00:19:31.751 "dhgroup": "ffdhe8192" 00:19:31.751 } 00:19:31.751 } 00:19:31.751 ]' 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.751 13:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.751 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.009 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:32.009 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.942 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.200 
13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.200 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.458 00:19:33.458 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.458 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.458 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.023 { 00:19:34.023 "cntlid": 49, 00:19:34.023 "qid": 0, 00:19:34.023 "state": "enabled", 00:19:34.023 "thread": "nvmf_tgt_poll_group_000", 00:19:34.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:34.023 "listen_address": { 00:19:34.023 "trtype": "TCP", 00:19:34.023 "adrfam": "IPv4", 00:19:34.023 "traddr": "10.0.0.2", 00:19:34.023 "trsvcid": "4420" 00:19:34.023 }, 00:19:34.023 "peer_address": { 00:19:34.023 "trtype": "TCP", 00:19:34.023 "adrfam": "IPv4", 00:19:34.023 "traddr": "10.0.0.1", 00:19:34.023 "trsvcid": "56084" 00:19:34.023 }, 00:19:34.023 "auth": { 00:19:34.023 "state": "completed", 00:19:34.023 "digest": "sha384", 00:19:34.023 "dhgroup": "null" 00:19:34.023 } 00:19:34.023 } 00:19:34.023 ]' 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:34.023 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.281 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:34.281 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:35.264 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.264 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.264 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.264 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.264 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.264 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.264 13:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.264 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.521 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.782 00:19:35.782 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.782 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.782 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.040 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.040 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.040 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.040 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.040 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.040 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.040 { 00:19:36.040 "cntlid": 51, 00:19:36.040 "qid": 0, 00:19:36.040 "state": "enabled", 00:19:36.040 "thread": "nvmf_tgt_poll_group_000", 00:19:36.040 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:36.040 "listen_address": { 00:19:36.040 "trtype": "TCP", 00:19:36.041 "adrfam": "IPv4", 00:19:36.041 "traddr": "10.0.0.2", 00:19:36.041 "trsvcid": "4420" 00:19:36.041 }, 00:19:36.041 "peer_address": { 00:19:36.041 "trtype": "TCP", 00:19:36.041 "adrfam": "IPv4", 00:19:36.041 "traddr": "10.0.0.1", 00:19:36.041 "trsvcid": "50158" 00:19:36.041 }, 00:19:36.041 "auth": { 00:19:36.041 "state": "completed", 00:19:36.041 "digest": "sha384", 00:19:36.041 "dhgroup": "null" 00:19:36.041 } 00:19:36.041 } 00:19:36.041 ]' 00:19:36.041 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.041 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.041 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.041 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:36.041 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.298 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.298 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.298 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.555 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:36.555 13:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.488 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.746 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.004 00:19:38.004 13:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.004 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.004 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.262 { 00:19:38.262 "cntlid": 53, 00:19:38.262 "qid": 0, 00:19:38.262 "state": "enabled", 00:19:38.262 "thread": "nvmf_tgt_poll_group_000", 00:19:38.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:38.262 "listen_address": { 00:19:38.262 "trtype": "TCP", 00:19:38.262 "adrfam": "IPv4", 00:19:38.262 "traddr": "10.0.0.2", 00:19:38.262 "trsvcid": "4420" 00:19:38.262 }, 00:19:38.262 "peer_address": { 00:19:38.262 "trtype": "TCP", 00:19:38.262 "adrfam": "IPv4", 00:19:38.262 "traddr": "10.0.0.1", 00:19:38.262 "trsvcid": "50194" 00:19:38.262 }, 00:19:38.262 "auth": { 00:19:38.262 "state": "completed", 00:19:38.262 "digest": "sha384", 00:19:38.262 "dhgroup": "null" 00:19:38.262 } 00:19:38.262 } 00:19:38.262 ]' 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.262 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.829 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:38.829 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.762 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.762 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:39.762 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.762 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.762 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:39.762 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.762 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.762 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:39.763 
13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.763 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.763 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.763 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.763 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.763 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.329 00:19:40.329 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.329 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.329 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.329 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.329 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.329 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.329 13:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.587 { 00:19:40.587 "cntlid": 55, 00:19:40.587 "qid": 0, 00:19:40.587 "state": "enabled", 00:19:40.587 "thread": "nvmf_tgt_poll_group_000", 00:19:40.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:40.587 "listen_address": { 00:19:40.587 "trtype": "TCP", 00:19:40.587 "adrfam": "IPv4", 00:19:40.587 "traddr": "10.0.0.2", 00:19:40.587 "trsvcid": "4420" 00:19:40.587 }, 00:19:40.587 "peer_address": { 00:19:40.587 "trtype": "TCP", 00:19:40.587 "adrfam": "IPv4", 00:19:40.587 "traddr": "10.0.0.1", 00:19:40.587 "trsvcid": "50222" 00:19:40.587 }, 00:19:40.587 "auth": { 00:19:40.587 "state": "completed", 00:19:40.587 "digest": "sha384", 00:19:40.587 "dhgroup": "null" 00:19:40.587 } 00:19:40.587 } 00:19:40.587 ]' 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.587 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.845 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:40.845 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.777 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.777 13:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.033 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.291 00:19:42.291 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.291 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.291 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.854 { 00:19:42.854 "cntlid": 57, 00:19:42.854 "qid": 0, 00:19:42.854 "state": "enabled", 00:19:42.854 "thread": "nvmf_tgt_poll_group_000", 00:19:42.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:42.854 "listen_address": { 00:19:42.854 "trtype": "TCP", 00:19:42.854 "adrfam": "IPv4", 00:19:42.854 "traddr": "10.0.0.2", 00:19:42.854 
"trsvcid": "4420" 00:19:42.854 }, 00:19:42.854 "peer_address": { 00:19:42.854 "trtype": "TCP", 00:19:42.854 "adrfam": "IPv4", 00:19:42.854 "traddr": "10.0.0.1", 00:19:42.854 "trsvcid": "50248" 00:19:42.854 }, 00:19:42.854 "auth": { 00:19:42.854 "state": "completed", 00:19:42.854 "digest": "sha384", 00:19:42.854 "dhgroup": "ffdhe2048" 00:19:42.854 } 00:19:42.854 } 00:19:42.854 ]' 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.854 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.112 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:43.112 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.045 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.303 13:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.303 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.560 00:19:44.560 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.560 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.560 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.816 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.816 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.816 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.816 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.816 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.816 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.816 { 00:19:44.816 "cntlid": 59, 00:19:44.816 "qid": 0, 00:19:44.816 "state": "enabled", 00:19:44.816 "thread": "nvmf_tgt_poll_group_000", 00:19:44.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:44.816 "listen_address": { 00:19:44.816 "trtype": "TCP", 00:19:44.816 "adrfam": "IPv4", 00:19:44.816 "traddr": "10.0.0.2", 00:19:44.816 "trsvcid": "4420" 00:19:44.816 }, 00:19:44.816 "peer_address": { 00:19:44.816 "trtype": "TCP", 00:19:44.816 "adrfam": "IPv4", 00:19:44.816 "traddr": "10.0.0.1", 00:19:44.816 "trsvcid": "50274" 00:19:44.816 }, 00:19:44.816 "auth": { 00:19:44.816 "state": "completed", 00:19:44.816 "digest": "sha384", 00:19:44.816 "dhgroup": "ffdhe2048" 00:19:44.816 } 00:19:44.816 } 00:19:44.816 ]' 00:19:44.816 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.074 13:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.074 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.074 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.074 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.074 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.074 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.074 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.331 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:45.331 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.265 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.522 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:46.522 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.523 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.089 00:19:47.089 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.089 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.089 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.347 13:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.347 { 00:19:47.347 "cntlid": 61, 00:19:47.347 "qid": 0, 00:19:47.347 "state": "enabled", 00:19:47.347 "thread": "nvmf_tgt_poll_group_000", 00:19:47.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:47.347 "listen_address": { 00:19:47.347 "trtype": "TCP", 00:19:47.347 "adrfam": "IPv4", 00:19:47.347 "traddr": "10.0.0.2", 00:19:47.347 "trsvcid": "4420" 00:19:47.347 }, 00:19:47.347 "peer_address": { 00:19:47.347 "trtype": "TCP", 00:19:47.347 "adrfam": "IPv4", 00:19:47.347 "traddr": "10.0.0.1", 00:19:47.347 "trsvcid": "35158" 00:19:47.347 }, 00:19:47.347 "auth": { 00:19:47.347 "state": "completed", 00:19:47.347 "digest": "sha384", 00:19:47.347 "dhgroup": "ffdhe2048" 00:19:47.347 } 00:19:47.347 } 00:19:47.347 ]' 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.347 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.605 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:47.606 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:48.565 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.565 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:48.565 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.565 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.565 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.565 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.565 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.565 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.823 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.388 00:19:49.388 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.388 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.388 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.388 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.388 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.388 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.388 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.662 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.662 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.662 { 00:19:49.662 "cntlid": 63, 00:19:49.662 "qid": 0, 00:19:49.662 "state": "enabled", 00:19:49.662 "thread": "nvmf_tgt_poll_group_000", 00:19:49.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:49.662 "listen_address": { 00:19:49.662 "trtype": "TCP", 00:19:49.662 "adrfam": 
"IPv4", 00:19:49.662 "traddr": "10.0.0.2", 00:19:49.662 "trsvcid": "4420" 00:19:49.662 }, 00:19:49.662 "peer_address": { 00:19:49.662 "trtype": "TCP", 00:19:49.662 "adrfam": "IPv4", 00:19:49.662 "traddr": "10.0.0.1", 00:19:49.662 "trsvcid": "35192" 00:19:49.662 }, 00:19:49.662 "auth": { 00:19:49.662 "state": "completed", 00:19:49.662 "digest": "sha384", 00:19:49.662 "dhgroup": "ffdhe2048" 00:19:49.662 } 00:19:49.662 } 00:19:49.662 ]' 00:19:49.662 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.662 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.662 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.662 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.662 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.662 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.662 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.662 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.919 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:49.919 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.853 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:51.112 
13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.112 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.370 00:19:51.628 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.628 13:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.628 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.887 { 00:19:51.887 "cntlid": 65, 00:19:51.887 "qid": 0, 00:19:51.887 "state": "enabled", 00:19:51.887 "thread": "nvmf_tgt_poll_group_000", 00:19:51.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:51.887 "listen_address": { 00:19:51.887 "trtype": "TCP", 00:19:51.887 "adrfam": "IPv4", 00:19:51.887 "traddr": "10.0.0.2", 00:19:51.887 "trsvcid": "4420" 00:19:51.887 }, 00:19:51.887 "peer_address": { 00:19:51.887 "trtype": "TCP", 00:19:51.887 "adrfam": "IPv4", 00:19:51.887 "traddr": "10.0.0.1", 00:19:51.887 "trsvcid": "35224" 00:19:51.887 }, 00:19:51.887 "auth": { 00:19:51.887 "state": "completed", 00:19:51.887 "digest": "sha384", 00:19:51.887 "dhgroup": "ffdhe3072" 00:19:51.887 } 00:19:51.887 } 00:19:51.887 ]' 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.887 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.146 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:52.146 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.080 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.647 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.648 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.648 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.906 00:19:53.906 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.906 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.906 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.163 13:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.163 { 00:19:54.163 "cntlid": 67, 00:19:54.163 "qid": 0, 00:19:54.163 "state": "enabled", 00:19:54.163 "thread": "nvmf_tgt_poll_group_000", 00:19:54.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:54.163 "listen_address": { 00:19:54.163 "trtype": "TCP", 00:19:54.163 "adrfam": "IPv4", 00:19:54.163 "traddr": "10.0.0.2", 00:19:54.163 "trsvcid": "4420" 00:19:54.163 }, 00:19:54.163 "peer_address": { 00:19:54.163 "trtype": "TCP", 00:19:54.163 "adrfam": "IPv4", 00:19:54.163 "traddr": "10.0.0.1", 00:19:54.163 "trsvcid": "35234" 00:19:54.163 }, 00:19:54.163 "auth": { 00:19:54.163 "state": "completed", 00:19:54.163 "digest": "sha384", 00:19:54.163 "dhgroup": "ffdhe3072" 00:19:54.163 } 00:19:54.163 } 00:19:54.163 ]' 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.163 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.420 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:54.420 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.351 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.919 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.176 00:19:56.176 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.176 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.176 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.432 { 00:19:56.432 "cntlid": 69, 00:19:56.432 "qid": 0, 00:19:56.432 "state": "enabled", 00:19:56.432 "thread": "nvmf_tgt_poll_group_000", 00:19:56.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:56.432 
"listen_address": { 00:19:56.432 "trtype": "TCP", 00:19:56.432 "adrfam": "IPv4", 00:19:56.432 "traddr": "10.0.0.2", 00:19:56.432 "trsvcid": "4420" 00:19:56.432 }, 00:19:56.432 "peer_address": { 00:19:56.432 "trtype": "TCP", 00:19:56.432 "adrfam": "IPv4", 00:19:56.432 "traddr": "10.0.0.1", 00:19:56.432 "trsvcid": "42022" 00:19:56.432 }, 00:19:56.432 "auth": { 00:19:56.432 "state": "completed", 00:19:56.432 "digest": "sha384", 00:19:56.432 "dhgroup": "ffdhe3072" 00:19:56.432 } 00:19:56.432 } 00:19:56.432 ]' 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.432 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.689 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.689 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.689 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.945 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:56.945 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.875 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.132 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.389 00:19:58.389 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.389 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:58.389 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.647 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.648 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.648 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.648 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.906 { 00:19:58.906 "cntlid": 71, 00:19:58.906 "qid": 0, 00:19:58.906 "state": "enabled", 00:19:58.906 "thread": "nvmf_tgt_poll_group_000", 00:19:58.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:58.906 "listen_address": { 00:19:58.906 "trtype": "TCP", 00:19:58.906 "adrfam": "IPv4", 00:19:58.906 "traddr": "10.0.0.2", 00:19:58.906 "trsvcid": "4420" 00:19:58.906 }, 00:19:58.906 "peer_address": { 00:19:58.906 "trtype": "TCP", 00:19:58.906 "adrfam": "IPv4", 00:19:58.906 "traddr": "10.0.0.1", 00:19:58.906 "trsvcid": "42064" 00:19:58.906 }, 00:19:58.906 "auth": { 00:19:58.906 "state": "completed", 00:19:58.906 "digest": "sha384", 00:19:58.906 "dhgroup": "ffdhe3072" 00:19:58.906 } 00:19:58.906 } 00:19:58.906 ]' 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.906 13:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.906 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.907 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.164 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:19:59.164 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:00.096 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.354 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.919 00:20:00.919 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.919 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.919 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.919 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.919 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.919 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.919 13:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.179 { 00:20:01.179 "cntlid": 73, 00:20:01.179 "qid": 0, 00:20:01.179 "state": "enabled", 00:20:01.179 "thread": "nvmf_tgt_poll_group_000", 00:20:01.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:01.179 "listen_address": { 00:20:01.179 "trtype": "TCP", 00:20:01.179 "adrfam": "IPv4", 00:20:01.179 "traddr": "10.0.0.2", 00:20:01.179 "trsvcid": "4420" 00:20:01.179 }, 00:20:01.179 "peer_address": { 00:20:01.179 "trtype": "TCP", 00:20:01.179 "adrfam": "IPv4", 00:20:01.179 "traddr": "10.0.0.1", 00:20:01.179 "trsvcid": "42088" 00:20:01.179 }, 00:20:01.179 "auth": { 00:20:01.179 "state": "completed", 00:20:01.179 "digest": "sha384", 00:20:01.179 "dhgroup": "ffdhe4096" 00:20:01.179 } 00:20:01.179 } 00:20:01.179 ]' 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.179 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.179 13:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.478 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:01.478 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:02.430 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.688 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.252 00:20:03.252 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.252 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.252 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.510 { 00:20:03.510 "cntlid": 75, 00:20:03.510 "qid": 0, 00:20:03.510 "state": "enabled", 00:20:03.510 "thread": "nvmf_tgt_poll_group_000", 00:20:03.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:03.510 
"listen_address": { 00:20:03.510 "trtype": "TCP", 00:20:03.510 "adrfam": "IPv4", 00:20:03.510 "traddr": "10.0.0.2", 00:20:03.510 "trsvcid": "4420" 00:20:03.510 }, 00:20:03.510 "peer_address": { 00:20:03.510 "trtype": "TCP", 00:20:03.510 "adrfam": "IPv4", 00:20:03.510 "traddr": "10.0.0.1", 00:20:03.510 "trsvcid": "42120" 00:20:03.510 }, 00:20:03.510 "auth": { 00:20:03.510 "state": "completed", 00:20:03.510 "digest": "sha384", 00:20:03.510 "dhgroup": "ffdhe4096" 00:20:03.510 } 00:20:03.510 } 00:20:03.510 ]' 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.510 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.768 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:03.768 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.701 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.959 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.216 00:20:05.473 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:05.473 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.473 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.730 { 00:20:05.730 "cntlid": 77, 00:20:05.730 "qid": 0, 00:20:05.730 "state": "enabled", 00:20:05.730 "thread": "nvmf_tgt_poll_group_000", 00:20:05.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:05.730 "listen_address": { 00:20:05.730 "trtype": "TCP", 00:20:05.730 "adrfam": "IPv4", 00:20:05.730 "traddr": "10.0.0.2", 00:20:05.730 "trsvcid": "4420" 00:20:05.730 }, 00:20:05.730 "peer_address": { 00:20:05.730 "trtype": "TCP", 00:20:05.730 "adrfam": "IPv4", 00:20:05.730 "traddr": "10.0.0.1", 00:20:05.730 "trsvcid": "40526" 00:20:05.730 }, 00:20:05.730 "auth": { 00:20:05.730 "state": "completed", 00:20:05.730 "digest": "sha384", 00:20:05.730 "dhgroup": "ffdhe4096" 00:20:05.730 } 00:20:05.730 } 00:20:05.730 ]' 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.730 13:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.730 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.988 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:05.988 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.944 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:07.202 13:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.202 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.767 00:20:07.767 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.767 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.767 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.024 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.025 13:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.025 { 00:20:08.025 "cntlid": 79, 00:20:08.025 "qid": 0, 00:20:08.025 "state": "enabled", 00:20:08.025 "thread": "nvmf_tgt_poll_group_000", 00:20:08.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:08.025 "listen_address": { 00:20:08.025 "trtype": "TCP", 00:20:08.025 "adrfam": "IPv4", 00:20:08.025 "traddr": "10.0.0.2", 00:20:08.025 "trsvcid": "4420" 00:20:08.025 }, 00:20:08.025 "peer_address": { 00:20:08.025 "trtype": "TCP", 00:20:08.025 "adrfam": "IPv4", 00:20:08.025 "traddr": "10.0.0.1", 00:20:08.025 "trsvcid": "40552" 00:20:08.025 }, 00:20:08.025 "auth": { 00:20:08.025 "state": "completed", 00:20:08.025 "digest": "sha384", 00:20:08.025 "dhgroup": "ffdhe4096" 00:20:08.025 } 00:20:08.025 } 00:20:08.025 ]' 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.025 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.025 13:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.590 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:08.590 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:09.523 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.780 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.343 00:20:10.343 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.343 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.343 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.600 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.600 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.601 { 00:20:10.601 "cntlid": 81, 00:20:10.601 "qid": 0, 00:20:10.601 "state": "enabled", 00:20:10.601 "thread": "nvmf_tgt_poll_group_000", 00:20:10.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:10.601 "listen_address": { 
00:20:10.601 "trtype": "TCP", 00:20:10.601 "adrfam": "IPv4", 00:20:10.601 "traddr": "10.0.0.2", 00:20:10.601 "trsvcid": "4420" 00:20:10.601 }, 00:20:10.601 "peer_address": { 00:20:10.601 "trtype": "TCP", 00:20:10.601 "adrfam": "IPv4", 00:20:10.601 "traddr": "10.0.0.1", 00:20:10.601 "trsvcid": "40584" 00:20:10.601 }, 00:20:10.601 "auth": { 00:20:10.601 "state": "completed", 00:20:10.601 "digest": "sha384", 00:20:10.601 "dhgroup": "ffdhe6144" 00:20:10.601 } 00:20:10.601 } 00:20:10.601 ]' 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.601 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.601 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.601 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.601 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.858 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:10.858 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.789 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.046 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.608 00:20:12.608 13:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.608 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.608 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.865 { 00:20:12.865 "cntlid": 83, 00:20:12.865 "qid": 0, 00:20:12.865 "state": "enabled", 00:20:12.865 "thread": "nvmf_tgt_poll_group_000", 00:20:12.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:12.865 "listen_address": { 00:20:12.865 "trtype": "TCP", 00:20:12.865 "adrfam": "IPv4", 00:20:12.865 "traddr": "10.0.0.2", 00:20:12.865 "trsvcid": "4420" 00:20:12.865 }, 00:20:12.865 "peer_address": { 00:20:12.865 "trtype": "TCP", 00:20:12.865 "adrfam": "IPv4", 00:20:12.865 "traddr": "10.0.0.1", 00:20:12.865 "trsvcid": "40614" 00:20:12.865 }, 00:20:12.865 "auth": { 00:20:12.865 "state": "completed", 00:20:12.865 "digest": "sha384", 00:20:12.865 "dhgroup": "ffdhe6144" 00:20:12.865 } 00:20:12.865 } 00:20:12.865 ]' 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.865 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.866 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.866 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.122 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.122 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.122 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.379 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:13.379 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:14.310 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.310 13:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:14.310 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.310 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.310 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.310 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.310 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.310 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.567 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.131 00:20:15.131 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.131 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.131 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.387 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.388 { 00:20:15.388 "cntlid": 85, 00:20:15.388 "qid": 0, 00:20:15.388 "state": "enabled", 00:20:15.388 "thread": "nvmf_tgt_poll_group_000", 00:20:15.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:15.388 "listen_address": { 00:20:15.388 "trtype": "TCP", 00:20:15.388 "adrfam": "IPv4", 00:20:15.388 "traddr": "10.0.0.2", 00:20:15.388 "trsvcid": "4420" 00:20:15.388 }, 00:20:15.388 "peer_address": { 00:20:15.388 "trtype": "TCP", 00:20:15.388 "adrfam": "IPv4", 00:20:15.388 "traddr": "10.0.0.1", 00:20:15.388 "trsvcid": "40648" 00:20:15.388 }, 00:20:15.388 "auth": { 00:20:15.388 "state": "completed", 00:20:15.388 "digest": "sha384", 00:20:15.388 "dhgroup": "ffdhe6144" 00:20:15.388 } 00:20:15.388 } 00:20:15.388 ]' 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.388 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.645 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:15.645 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.576 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.834 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.399 00:20:17.399 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.399 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.399 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.658 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.658 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.658 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.658 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.658 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.658 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.658 { 00:20:17.658 "cntlid": 87, 00:20:17.658 "qid": 0, 00:20:17.658 "state": "enabled", 00:20:17.658 "thread": "nvmf_tgt_poll_group_000", 00:20:17.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:17.658 "listen_address": { 00:20:17.658 "trtype": 
"TCP", 00:20:17.658 "adrfam": "IPv4", 00:20:17.658 "traddr": "10.0.0.2", 00:20:17.658 "trsvcid": "4420" 00:20:17.658 }, 00:20:17.658 "peer_address": { 00:20:17.658 "trtype": "TCP", 00:20:17.658 "adrfam": "IPv4", 00:20:17.658 "traddr": "10.0.0.1", 00:20:17.658 "trsvcid": "33772" 00:20:17.658 }, 00:20:17.658 "auth": { 00:20:17.658 "state": "completed", 00:20:17.658 "digest": "sha384", 00:20:17.658 "dhgroup": "ffdhe6144" 00:20:17.658 } 00:20:17.658 } 00:20:17.658 ]' 00:20:17.658 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.915 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.915 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.915 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.915 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.915 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.915 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.915 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.172 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:18.172 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.103 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.360 13:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.360 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.290 00:20:20.290 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.291 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.291 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.548 { 00:20:20.548 "cntlid": 89, 00:20:20.548 "qid": 0, 00:20:20.548 "state": "enabled", 00:20:20.548 "thread": "nvmf_tgt_poll_group_000", 00:20:20.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:20.548 "listen_address": { 00:20:20.548 "trtype": "TCP", 00:20:20.548 "adrfam": "IPv4", 00:20:20.548 "traddr": "10.0.0.2", 00:20:20.548 "trsvcid": "4420" 00:20:20.548 }, 00:20:20.548 "peer_address": { 00:20:20.548 "trtype": "TCP", 00:20:20.548 "adrfam": "IPv4", 00:20:20.548 "traddr": "10.0.0.1", 00:20:20.548 "trsvcid": "33798" 00:20:20.548 }, 00:20:20.548 "auth": { 00:20:20.548 "state": "completed", 00:20:20.548 "digest": "sha384", 00:20:20.548 "dhgroup": "ffdhe8192" 00:20:20.548 } 00:20:20.548 } 00:20:20.548 ]' 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.548 13:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.548 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.548 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.548 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.548 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.548 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.548 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.112 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:21.112 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.045 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.978 00:20:22.978 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.978 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.978 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.237 { 00:20:23.237 "cntlid": 91, 00:20:23.237 "qid": 0, 00:20:23.237 "state": "enabled", 00:20:23.237 "thread": "nvmf_tgt_poll_group_000", 00:20:23.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:23.237 "listen_address": { 00:20:23.237 "trtype": "TCP", 00:20:23.237 "adrfam": "IPv4", 00:20:23.237 "traddr": "10.0.0.2", 00:20:23.237 "trsvcid": "4420" 00:20:23.237 }, 00:20:23.237 "peer_address": { 00:20:23.237 "trtype": "TCP", 00:20:23.237 "adrfam": "IPv4", 00:20:23.237 "traddr": "10.0.0.1", 00:20:23.237 "trsvcid": "33834" 00:20:23.237 }, 00:20:23.237 "auth": { 00:20:23.237 "state": "completed", 00:20:23.237 "digest": "sha384", 00:20:23.237 "dhgroup": "ffdhe8192" 00:20:23.237 } 00:20:23.237 } 00:20:23.237 ]' 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.237 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.494 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.494 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.494 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:23.494 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.494 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.759 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:23.760 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:24.700 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.958 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.915 00:20:25.915 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.915 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.915 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.195 { 00:20:26.195 "cntlid": 93, 00:20:26.195 "qid": 0, 00:20:26.195 "state": "enabled", 00:20:26.195 "thread": "nvmf_tgt_poll_group_000", 00:20:26.195 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:26.195 "listen_address": { 00:20:26.195 "trtype": "TCP", 00:20:26.195 "adrfam": "IPv4", 00:20:26.195 "traddr": "10.0.0.2", 00:20:26.195 "trsvcid": "4420" 00:20:26.195 }, 00:20:26.195 "peer_address": { 00:20:26.195 "trtype": "TCP", 00:20:26.195 "adrfam": "IPv4", 00:20:26.195 "traddr": "10.0.0.1", 00:20:26.195 "trsvcid": "38788" 00:20:26.195 }, 00:20:26.195 "auth": { 00:20:26.195 "state": "completed", 00:20:26.195 "digest": "sha384", 00:20:26.195 "dhgroup": "ffdhe8192" 00:20:26.195 } 00:20:26.195 } 00:20:26.195 ]' 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.195 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.760 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:26.761 13:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.692 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.692 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.621 00:20:28.621 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:28.621 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.621 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.878 { 00:20:28.878 "cntlid": 95, 00:20:28.878 "qid": 0, 00:20:28.878 "state": "enabled", 00:20:28.878 "thread": "nvmf_tgt_poll_group_000", 00:20:28.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:28.878 "listen_address": { 00:20:28.878 "trtype": "TCP", 00:20:28.878 "adrfam": "IPv4", 00:20:28.878 "traddr": "10.0.0.2", 00:20:28.878 "trsvcid": "4420" 00:20:28.878 }, 00:20:28.878 "peer_address": { 00:20:28.878 "trtype": "TCP", 00:20:28.878 "adrfam": "IPv4", 00:20:28.878 "traddr": "10.0.0.1", 00:20:28.878 "trsvcid": "38818" 00:20:28.878 }, 00:20:28.878 "auth": { 00:20:28.878 "state": "completed", 00:20:28.878 "digest": "sha384", 00:20:28.878 "dhgroup": "ffdhe8192" 00:20:28.878 } 00:20:28.878 } 00:20:28.878 ]' 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.878 13:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.878 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.134 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.134 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.134 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.390 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:29.390 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.321 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.885 00:20:30.885 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.885 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.885 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.143 13:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.143 { 00:20:31.143 "cntlid": 97, 00:20:31.143 "qid": 0, 00:20:31.143 "state": "enabled", 00:20:31.143 "thread": "nvmf_tgt_poll_group_000", 00:20:31.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:31.143 "listen_address": { 00:20:31.143 "trtype": "TCP", 00:20:31.143 "adrfam": "IPv4", 00:20:31.143 "traddr": "10.0.0.2", 00:20:31.143 "trsvcid": "4420" 00:20:31.143 }, 00:20:31.143 "peer_address": { 00:20:31.143 "trtype": "TCP", 00:20:31.143 "adrfam": "IPv4", 00:20:31.143 "traddr": "10.0.0.1", 00:20:31.143 "trsvcid": "38842" 00:20:31.143 }, 00:20:31.143 "auth": { 00:20:31.143 "state": "completed", 00:20:31.143 "digest": "sha512", 00:20:31.143 "dhgroup": "null" 00:20:31.143 } 00:20:31.143 } 00:20:31.143 ]' 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.143 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.400 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:31.400 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:32.331 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.589 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.847 00:20:32.847 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.847 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.847 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.103 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.103 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.103 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.103 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.360 { 00:20:33.360 "cntlid": 99, 
00:20:33.360 "qid": 0, 00:20:33.360 "state": "enabled", 00:20:33.360 "thread": "nvmf_tgt_poll_group_000", 00:20:33.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:33.360 "listen_address": { 00:20:33.360 "trtype": "TCP", 00:20:33.360 "adrfam": "IPv4", 00:20:33.360 "traddr": "10.0.0.2", 00:20:33.360 "trsvcid": "4420" 00:20:33.360 }, 00:20:33.360 "peer_address": { 00:20:33.360 "trtype": "TCP", 00:20:33.360 "adrfam": "IPv4", 00:20:33.360 "traddr": "10.0.0.1", 00:20:33.360 "trsvcid": "38876" 00:20:33.360 }, 00:20:33.360 "auth": { 00:20:33.360 "state": "completed", 00:20:33.360 "digest": "sha512", 00:20:33.360 "dhgroup": "null" 00:20:33.360 } 00:20:33.360 } 00:20:33.360 ]' 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.360 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.617 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret 
DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:33.617 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:34.549 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.806 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.063 00:20:35.063 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.063 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.063 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.321 { 00:20:35.321 "cntlid": 101, 00:20:35.321 "qid": 0, 00:20:35.321 "state": "enabled", 00:20:35.321 "thread": "nvmf_tgt_poll_group_000", 00:20:35.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:35.321 "listen_address": { 00:20:35.321 "trtype": "TCP", 00:20:35.321 "adrfam": "IPv4", 00:20:35.321 "traddr": "10.0.0.2", 00:20:35.321 "trsvcid": "4420" 00:20:35.321 }, 00:20:35.321 "peer_address": { 00:20:35.321 "trtype": "TCP", 00:20:35.321 "adrfam": "IPv4", 00:20:35.321 "traddr": "10.0.0.1", 00:20:35.321 "trsvcid": "53778" 00:20:35.321 }, 00:20:35.321 "auth": { 00:20:35.321 "state": "completed", 00:20:35.321 "digest": "sha512", 00:20:35.321 "dhgroup": "null" 00:20:35.321 } 00:20:35.321 } 
00:20:35.321 ]' 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.321 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.578 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:35.578 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.578 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.578 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.578 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.835 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:35.835 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.766 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.766 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.024 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.282 00:20:37.282 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.282 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.282 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.540 { 00:20:37.540 "cntlid": 103, 00:20:37.540 "qid": 0, 00:20:37.540 "state": "enabled", 00:20:37.540 "thread": "nvmf_tgt_poll_group_000", 00:20:37.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:37.540 "listen_address": { 00:20:37.540 "trtype": "TCP", 00:20:37.540 "adrfam": "IPv4", 00:20:37.540 "traddr": "10.0.0.2", 00:20:37.540 "trsvcid": "4420" 00:20:37.540 }, 00:20:37.540 "peer_address": { 00:20:37.540 "trtype": "TCP", 00:20:37.540 "adrfam": "IPv4", 00:20:37.540 "traddr": "10.0.0.1", 00:20:37.540 "trsvcid": "53824" 00:20:37.540 }, 00:20:37.540 "auth": { 00:20:37.540 "state": "completed", 00:20:37.540 "digest": "sha512", 00:20:37.540 "dhgroup": "null" 00:20:37.540 } 00:20:37.540 } 00:20:37.540 ]' 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.540 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.797 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.797 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:37.797 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.797 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.798 13:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.798 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.054 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:38.054 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.984 13:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:38.984 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.241 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.498 00:20:39.498 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.498 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.498 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.755 { 00:20:39.755 "cntlid": 105, 00:20:39.755 "qid": 0, 00:20:39.755 "state": "enabled", 00:20:39.755 "thread": "nvmf_tgt_poll_group_000", 00:20:39.755 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:39.755 "listen_address": { 00:20:39.755 "trtype": "TCP", 00:20:39.755 "adrfam": "IPv4", 00:20:39.755 "traddr": "10.0.0.2", 00:20:39.755 "trsvcid": "4420" 00:20:39.755 }, 00:20:39.755 "peer_address": { 00:20:39.755 "trtype": "TCP", 00:20:39.755 "adrfam": "IPv4", 00:20:39.755 "traddr": "10.0.0.1", 00:20:39.755 "trsvcid": "53858" 00:20:39.755 }, 00:20:39.755 "auth": { 00:20:39.755 "state": "completed", 00:20:39.755 "digest": "sha512", 00:20:39.755 "dhgroup": "ffdhe2048" 00:20:39.755 } 00:20:39.755 } 00:20:39.755 ]' 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.755 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.012 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.012 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.012 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.012 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.012 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.270 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret 
DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:40.270 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.202 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.459 13:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.459 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.716 00:20:41.716 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.716 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.716 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.973 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.229 { 00:20:42.229 "cntlid": 107, 00:20:42.229 "qid": 0, 00:20:42.229 "state": "enabled", 00:20:42.229 "thread": "nvmf_tgt_poll_group_000", 00:20:42.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:42.229 "listen_address": { 00:20:42.229 "trtype": "TCP", 00:20:42.229 "adrfam": "IPv4", 00:20:42.229 "traddr": "10.0.0.2", 00:20:42.229 "trsvcid": "4420" 00:20:42.229 }, 00:20:42.229 "peer_address": { 00:20:42.229 "trtype": "TCP", 00:20:42.229 "adrfam": "IPv4", 00:20:42.229 "traddr": "10.0.0.1", 00:20:42.229 "trsvcid": "53880" 00:20:42.229 }, 00:20:42.229 "auth": { 00:20:42.229 "state": 
"completed", 00:20:42.229 "digest": "sha512", 00:20:42.229 "dhgroup": "ffdhe2048" 00:20:42.229 } 00:20:42.229 } 00:20:42.229 ]' 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.229 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.485 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:42.485 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:43.415 13:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.415 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:43.415 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.415 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.415 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.415 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.415 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.415 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.672 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.928 00:20:43.928 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.928 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.928 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.185 
13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.186 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.186 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.186 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.186 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.186 { 00:20:44.186 "cntlid": 109, 00:20:44.186 "qid": 0, 00:20:44.186 "state": "enabled", 00:20:44.186 "thread": "nvmf_tgt_poll_group_000", 00:20:44.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:44.186 "listen_address": { 00:20:44.186 "trtype": "TCP", 00:20:44.186 "adrfam": "IPv4", 00:20:44.186 "traddr": "10.0.0.2", 00:20:44.186 "trsvcid": "4420" 00:20:44.186 }, 00:20:44.186 "peer_address": { 00:20:44.186 "trtype": "TCP", 00:20:44.186 "adrfam": "IPv4", 00:20:44.186 "traddr": "10.0.0.1", 00:20:44.186 "trsvcid": "53902" 00:20:44.186 }, 00:20:44.186 "auth": { 00:20:44.186 "state": "completed", 00:20:44.186 "digest": "sha512", 00:20:44.186 "dhgroup": "ffdhe2048" 00:20:44.186 } 00:20:44.186 } 00:20:44.186 ]' 00:20:44.186 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.443 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.443 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.443 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.443 13:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.443 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.443 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.443 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.700 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:44.700 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:45.629 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.629 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:45.629 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.629 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.629 
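The entries above complete one full `connect_authenticate` iteration: restrict the host to a single digest/dhgroup pair, allow the host on the subsystem with the key under test, attach a controller, verify the negotiated auth fields via `nvmf_subsystem_get_qpairs`, then tear everything down. A hedged sketch of that sequence, using the rpc.py path and NQNs taken from the log but only printing the commands rather than executing them:

```shell
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as seen in the log; this
# function prints the command sequence instead of running it, since the real
# commands require a live SPDK target and host.sock.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    # 1. Pin the host side to a single digest/dhgroup pair.
    echo "$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    # 2. Allow the host on the subsystem with the key under test.
    echo "$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key$keyid"
    # 3. Attach a controller, then fetch the qpairs whose .auth.{digest,
    #    dhgroup,state} fields the jq checks in the log verify.
    echo "$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key$keyid"
    echo "$rpc nvmf_subsystem_get_qpairs $subnqn"
    # 4. Tear down: detach the controller and remove the host (the log also
    #    exercises nvme-cli connect/disconnect between these steps).
    echo "$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0"
    echo "$rpc nvmf_subsystem_remove_host $subnqn $hostnqn"
}
```

For example, `connect_authenticate_sketch sha512 ffdhe3072 0` prints the six-command sequence corresponding to the `connect_authenticate sha512 ffdhe3072 0` run that follows in the log.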
13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.629 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.629 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.629 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.888 13:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.888 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.146 00:20:46.146 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.146 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.146 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.403 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.403 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.403 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.403 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.403 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.403 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.403 { 00:20:46.403 "cntlid": 111, 
00:20:46.403 "qid": 0, 00:20:46.403 "state": "enabled", 00:20:46.403 "thread": "nvmf_tgt_poll_group_000", 00:20:46.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:46.403 "listen_address": { 00:20:46.403 "trtype": "TCP", 00:20:46.403 "adrfam": "IPv4", 00:20:46.403 "traddr": "10.0.0.2", 00:20:46.403 "trsvcid": "4420" 00:20:46.403 }, 00:20:46.403 "peer_address": { 00:20:46.403 "trtype": "TCP", 00:20:46.403 "adrfam": "IPv4", 00:20:46.403 "traddr": "10.0.0.1", 00:20:46.403 "trsvcid": "58018" 00:20:46.403 }, 00:20:46.403 "auth": { 00:20:46.403 "state": "completed", 00:20:46.403 "digest": "sha512", 00:20:46.403 "dhgroup": "ffdhe2048" 00:20:46.403 } 00:20:46.403 } 00:20:46.403 ]' 00:20:46.403 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.661 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.661 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.661 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.661 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.661 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.661 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.661 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.919 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:46.919 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.109 13:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.109 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.367 00:20:48.367 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.367 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.367 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.625 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.625 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.625 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.625 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.882 { 00:20:48.882 "cntlid": 113, 00:20:48.882 "qid": 0, 00:20:48.882 "state": "enabled", 00:20:48.882 "thread": "nvmf_tgt_poll_group_000", 00:20:48.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:48.882 "listen_address": { 00:20:48.882 "trtype": "TCP", 00:20:48.882 "adrfam": "IPv4", 00:20:48.882 "traddr": "10.0.0.2", 00:20:48.882 "trsvcid": "4420" 00:20:48.882 }, 00:20:48.882 "peer_address": { 00:20:48.882 "trtype": "TCP", 00:20:48.882 "adrfam": "IPv4", 00:20:48.882 "traddr": "10.0.0.1", 00:20:48.882 "trsvcid": "58054" 00:20:48.882 }, 00:20:48.882 "auth": { 00:20:48.882 "state": 
"completed", 00:20:48.882 "digest": "sha512", 00:20:48.882 "dhgroup": "ffdhe3072" 00:20:48.882 } 00:20:48.882 } 00:20:48.882 ]' 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.882 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.139 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:49.140 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret 
DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.071 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.328 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.590 00:20:50.590 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.590 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.590 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.888 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.888 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.888 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.888 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.888 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.888 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.888 { 00:20:50.888 "cntlid": 115, 00:20:50.888 "qid": 0, 00:20:50.888 "state": "enabled", 00:20:50.888 "thread": "nvmf_tgt_poll_group_000", 00:20:50.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:50.888 "listen_address": { 00:20:50.888 "trtype": "TCP", 00:20:50.888 "adrfam": "IPv4", 00:20:50.888 "traddr": "10.0.0.2", 00:20:50.888 "trsvcid": "4420" 00:20:50.888 }, 00:20:50.888 "peer_address": { 00:20:50.888 "trtype": "TCP", 00:20:50.888 "adrfam": "IPv4", 00:20:50.888 "traddr": "10.0.0.1", 00:20:50.888 "trsvcid": "58068" 00:20:50.888 }, 00:20:50.888 "auth": { 00:20:50.888 "state": "completed", 00:20:50.888 "digest": "sha512", 00:20:50.888 "dhgroup": "ffdhe3072" 00:20:50.888 } 00:20:50.888 } 00:20:50.888 ]' 00:20:50.888 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.148 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.148 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.148 13:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.148 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.148 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.148 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.148 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.404 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:51.405 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.391 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.667 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.923 00:20:52.923 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.923 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.923 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.181 13:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.181 { 00:20:53.181 "cntlid": 117, 00:20:53.181 "qid": 0, 00:20:53.181 "state": "enabled", 00:20:53.181 "thread": "nvmf_tgt_poll_group_000", 00:20:53.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:53.181 "listen_address": { 00:20:53.181 "trtype": "TCP", 00:20:53.181 "adrfam": "IPv4", 00:20:53.181 "traddr": "10.0.0.2", 00:20:53.181 "trsvcid": "4420" 00:20:53.181 }, 00:20:53.181 "peer_address": { 00:20:53.181 "trtype": "TCP", 00:20:53.181 "adrfam": "IPv4", 00:20:53.181 "traddr": "10.0.0.1", 00:20:53.181 "trsvcid": "58098" 00:20:53.181 }, 00:20:53.181 "auth": { 00:20:53.181 "state": "completed", 00:20:53.181 "digest": "sha512", 00:20:53.181 "dhgroup": "ffdhe3072" 00:20:53.181 } 00:20:53.181 } 00:20:53.181 ]' 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.181 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.744 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:53.744 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:20:54.307 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.564 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.564 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.564 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.564 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.564 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.564 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.564 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.821 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:54.821 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.821 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.821 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.821 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.821 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.822 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:54.822 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.822 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.822 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.822 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.822 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.822 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.080 00:20:55.080 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.080 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.080 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.338 { 00:20:55.338 "cntlid": 119, 00:20:55.338 "qid": 0, 00:20:55.338 "state": "enabled", 00:20:55.338 "thread": "nvmf_tgt_poll_group_000", 00:20:55.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:55.338 "listen_address": { 00:20:55.338 "trtype": "TCP", 00:20:55.338 "adrfam": "IPv4", 00:20:55.338 "traddr": "10.0.0.2", 00:20:55.338 "trsvcid": "4420" 00:20:55.338 }, 00:20:55.338 "peer_address": { 00:20:55.338 "trtype": "TCP", 00:20:55.338 "adrfam": "IPv4", 00:20:55.338 "traddr": "10.0.0.1", 
00:20:55.338 "trsvcid": "41844" 00:20:55.338 }, 00:20:55.338 "auth": { 00:20:55.338 "state": "completed", 00:20:55.338 "digest": "sha512", 00:20:55.338 "dhgroup": "ffdhe3072" 00:20:55.338 } 00:20:55.338 } 00:20:55.338 ]' 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.338 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.597 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.597 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.597 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.597 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.597 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.855 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:55.855 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.789 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.046 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:57.046 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.046 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.046 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:57.046 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.046 13:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.046 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.047 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.047 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.047 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.047 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.047 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.047 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.304 00:20:57.304 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.304 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.304 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.562 { 00:20:57.562 "cntlid": 121, 00:20:57.562 "qid": 0, 00:20:57.562 "state": "enabled", 00:20:57.562 "thread": "nvmf_tgt_poll_group_000", 00:20:57.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:57.562 "listen_address": { 00:20:57.562 "trtype": "TCP", 00:20:57.562 "adrfam": "IPv4", 00:20:57.562 "traddr": "10.0.0.2", 00:20:57.562 "trsvcid": "4420" 00:20:57.562 }, 00:20:57.562 "peer_address": { 00:20:57.562 "trtype": "TCP", 00:20:57.562 "adrfam": "IPv4", 00:20:57.562 "traddr": "10.0.0.1", 00:20:57.562 "trsvcid": "41872" 00:20:57.562 }, 00:20:57.562 "auth": { 00:20:57.562 "state": "completed", 00:20:57.562 "digest": "sha512", 00:20:57.562 "dhgroup": "ffdhe4096" 00:20:57.562 } 00:20:57.562 } 00:20:57.562 ]' 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.562 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.820 13:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.820 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.820 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.820 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.820 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.078 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:58.078 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:20:59.011 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.011 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.011 13:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.012 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.012 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.012 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.012 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.012 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.269 13:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.269 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.527 00:20:59.527 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.527 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.527 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.784 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.784 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.784 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.784 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.040 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.041 { 00:21:00.041 "cntlid": 123, 00:21:00.041 "qid": 0, 00:21:00.041 "state": "enabled", 00:21:00.041 "thread": "nvmf_tgt_poll_group_000", 00:21:00.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:00.041 "listen_address": { 00:21:00.041 "trtype": "TCP", 00:21:00.041 "adrfam": "IPv4", 00:21:00.041 "traddr": "10.0.0.2", 00:21:00.041 "trsvcid": "4420" 00:21:00.041 }, 00:21:00.041 "peer_address": { 00:21:00.041 "trtype": "TCP", 00:21:00.041 "adrfam": "IPv4", 00:21:00.041 "traddr": "10.0.0.1", 00:21:00.041 "trsvcid": "41890" 00:21:00.041 }, 00:21:00.041 "auth": { 00:21:00.041 "state": "completed", 00:21:00.041 "digest": "sha512", 00:21:00.041 "dhgroup": "ffdhe4096" 00:21:00.041 } 00:21:00.041 } 00:21:00.041 ]' 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.041 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.298 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:21:00.298 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:21:01.230 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.230 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.230 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.230 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.230 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.230 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.230 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.230 13:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.487 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:01.487 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.488 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.052 00:21:02.052 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.052 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.052 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.309 { 00:21:02.309 "cntlid": 125, 00:21:02.309 "qid": 0, 00:21:02.309 "state": "enabled", 00:21:02.309 "thread": "nvmf_tgt_poll_group_000", 00:21:02.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:02.309 "listen_address": { 00:21:02.309 "trtype": "TCP", 00:21:02.309 "adrfam": "IPv4", 00:21:02.309 "traddr": "10.0.0.2", 00:21:02.309 
"trsvcid": "4420" 00:21:02.309 }, 00:21:02.309 "peer_address": { 00:21:02.309 "trtype": "TCP", 00:21:02.309 "adrfam": "IPv4", 00:21:02.309 "traddr": "10.0.0.1", 00:21:02.309 "trsvcid": "41936" 00:21:02.309 }, 00:21:02.309 "auth": { 00:21:02.309 "state": "completed", 00:21:02.309 "digest": "sha512", 00:21:02.309 "dhgroup": "ffdhe4096" 00:21:02.309 } 00:21:02.309 } 00:21:02.309 ]' 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.309 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.567 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:21:02.567 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.499 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.756 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.321 00:21:04.321 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.321 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.321 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.579 { 00:21:04.579 "cntlid": 127, 00:21:04.579 "qid": 0, 00:21:04.579 "state": "enabled", 00:21:04.579 "thread": "nvmf_tgt_poll_group_000", 00:21:04.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:04.579 "listen_address": { 00:21:04.579 "trtype": "TCP", 00:21:04.579 "adrfam": "IPv4", 00:21:04.579 "traddr": "10.0.0.2", 00:21:04.579 "trsvcid": "4420" 00:21:04.579 }, 00:21:04.579 "peer_address": { 00:21:04.579 "trtype": "TCP", 00:21:04.579 "adrfam": "IPv4", 00:21:04.579 "traddr": "10.0.0.1", 00:21:04.579 "trsvcid": "41966" 00:21:04.579 }, 00:21:04.579 "auth": { 00:21:04.579 "state": "completed", 00:21:04.579 "digest": "sha512", 00:21:04.579 "dhgroup": "ffdhe4096" 00:21:04.579 } 00:21:04.579 } 00:21:04.579 ]' 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.579 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.579 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.837 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:04.837 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
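Each round in the trace builds the controller-key arguments with the expansion `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})`, which is why the key3 rounds call `nvmf_subsystem_add_host`/`bdev_nvme_attach_controller` without a `--dhchap-ctrlr-key` flag while the key0/key1/key2 rounds include one. A minimal standalone sketch of that `${var:+...}` conditional expansion (the `ckeys` values here are placeholders, not the secrets from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the conditional flag construction seen in target/auth.sh@68:
#   ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
# The --dhchap-ctrlr-key flag pair is emitted only when ckeys[keyid] is set.

declare -a ckeys
ckeys[0]="DHHC-1:03:placeholder"   # key 0 has a controller key (placeholder value)
# ckeys[3] deliberately left unset: mirrors the key3 rounds in the trace,
# which pass no --dhchap-ctrlr-key argument.

build_ckey_args() {
    local keyid=$1
    # Expands to two words (flag + name) when set, to nothing when unset.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"
}

build_ckey_args 0   # -> --dhchap-ctrlr-key ckey0
build_ckey_args 3   # -> (empty)
```

The array form matters: when the key is unset the array is empty and contributes zero arguments to the RPC call, rather than an empty string argument.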
00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.771 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.029 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.594 00:21:06.594 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.594 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.594 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.851 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.851 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.851 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.851 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.851 13:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.851 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.851 { 00:21:06.851 "cntlid": 129, 00:21:06.851 "qid": 0, 00:21:06.851 "state": "enabled", 00:21:06.851 "thread": "nvmf_tgt_poll_group_000", 00:21:06.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:06.851 "listen_address": { 00:21:06.851 "trtype": "TCP", 00:21:06.851 "adrfam": "IPv4", 00:21:06.851 "traddr": "10.0.0.2", 00:21:06.851 "trsvcid": "4420" 00:21:06.852 }, 00:21:06.852 "peer_address": { 00:21:06.852 "trtype": "TCP", 00:21:06.852 "adrfam": "IPv4", 00:21:06.852 "traddr": "10.0.0.1", 00:21:06.852 "trsvcid": "60996" 00:21:06.852 }, 00:21:06.852 "auth": { 00:21:06.852 "state": "completed", 00:21:06.852 "digest": "sha512", 00:21:06.852 "dhgroup": "ffdhe6144" 00:21:06.852 } 00:21:06.852 } 00:21:06.852 ]' 00:21:06.852 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.852 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.852 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.852 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.852 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.109 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.109 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.109 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.366 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:21:07.366 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:21:08.298 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.298 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:08.298 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.298 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.298 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.298 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.298 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.298 13:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.555 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.119 00:21:09.119 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.119 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.119 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.375 { 00:21:09.375 "cntlid": 131, 00:21:09.375 "qid": 0, 00:21:09.375 "state": "enabled", 00:21:09.375 "thread": "nvmf_tgt_poll_group_000", 00:21:09.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:09.375 "listen_address": { 00:21:09.375 "trtype": "TCP", 00:21:09.375 "adrfam": "IPv4", 00:21:09.375 "traddr": "10.0.0.2", 00:21:09.375 
"trsvcid": "4420" 00:21:09.375 }, 00:21:09.375 "peer_address": { 00:21:09.375 "trtype": "TCP", 00:21:09.375 "adrfam": "IPv4", 00:21:09.375 "traddr": "10.0.0.1", 00:21:09.375 "trsvcid": "32798" 00:21:09.375 }, 00:21:09.375 "auth": { 00:21:09.375 "state": "completed", 00:21:09.375 "digest": "sha512", 00:21:09.375 "dhgroup": "ffdhe6144" 00:21:09.375 } 00:21:09.375 } 00:21:09.375 ]' 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.375 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.376 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.376 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.376 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.376 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.376 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.632 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:21:09.632 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.561 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.818 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.383 00:21:11.383 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.383 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:11.383 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.641 { 00:21:11.641 "cntlid": 133, 00:21:11.641 "qid": 0, 00:21:11.641 "state": "enabled", 00:21:11.641 "thread": "nvmf_tgt_poll_group_000", 00:21:11.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:11.641 "listen_address": { 00:21:11.641 "trtype": "TCP", 00:21:11.641 "adrfam": "IPv4", 00:21:11.641 "traddr": "10.0.0.2", 00:21:11.641 "trsvcid": "4420" 00:21:11.641 }, 00:21:11.641 "peer_address": { 00:21:11.641 "trtype": "TCP", 00:21:11.641 "adrfam": "IPv4", 00:21:11.641 "traddr": "10.0.0.1", 00:21:11.641 "trsvcid": "32822" 00:21:11.641 }, 00:21:11.641 "auth": { 00:21:11.641 "state": "completed", 00:21:11.641 "digest": "sha512", 00:21:11.641 "dhgroup": "ffdhe6144" 00:21:11.641 } 00:21:11.641 } 00:21:11.641 ]' 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.641 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.641 13:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.898 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.898 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.898 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.898 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.898 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.155 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:21:12.155 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:21:13.087 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.087 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.087 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.087 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.087 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.087 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.087 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.088 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.346 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.910 00:21:13.910 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.911 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.911 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.167 { 00:21:14.167 "cntlid": 135, 00:21:14.167 "qid": 0, 00:21:14.167 "state": "enabled", 00:21:14.167 "thread": "nvmf_tgt_poll_group_000", 00:21:14.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:14.167 "listen_address": { 00:21:14.167 "trtype": "TCP", 00:21:14.167 "adrfam": "IPv4", 00:21:14.167 "traddr": "10.0.0.2", 00:21:14.167 "trsvcid": "4420" 00:21:14.167 }, 00:21:14.167 "peer_address": { 00:21:14.167 "trtype": "TCP", 00:21:14.167 "adrfam": "IPv4", 00:21:14.167 "traddr": "10.0.0.1", 00:21:14.167 "trsvcid": "32836" 00:21:14.167 }, 00:21:14.167 "auth": { 00:21:14.167 "state": "completed", 00:21:14.167 "digest": "sha512", 00:21:14.167 "dhgroup": "ffdhe6144" 00:21:14.167 } 00:21:14.167 } 00:21:14.167 ]' 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.167 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.425 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:14.425 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.377 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.377 13:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.720 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.651 00:21:16.651 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.651 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.651 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.907 { 00:21:16.907 "cntlid": 137, 00:21:16.907 "qid": 0, 00:21:16.907 "state": "enabled", 00:21:16.907 "thread": "nvmf_tgt_poll_group_000", 00:21:16.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:16.907 "listen_address": { 00:21:16.907 "trtype": "TCP", 00:21:16.907 "adrfam": "IPv4", 00:21:16.907 "traddr": "10.0.0.2", 00:21:16.907 
"trsvcid": "4420" 00:21:16.907 }, 00:21:16.907 "peer_address": { 00:21:16.907 "trtype": "TCP", 00:21:16.907 "adrfam": "IPv4", 00:21:16.907 "traddr": "10.0.0.1", 00:21:16.907 "trsvcid": "54904" 00:21:16.907 }, 00:21:16.907 "auth": { 00:21:16.907 "state": "completed", 00:21:16.907 "digest": "sha512", 00:21:16.907 "dhgroup": "ffdhe8192" 00:21:16.907 } 00:21:16.907 } 00:21:16.907 ]' 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.907 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.164 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.164 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.164 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.421 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:21:17.421 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.351 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.608 13:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.608 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.540 00:21:19.540 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.540 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.540 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.797 { 00:21:19.797 "cntlid": 139, 00:21:19.797 "qid": 0, 00:21:19.797 "state": "enabled", 00:21:19.797 "thread": "nvmf_tgt_poll_group_000", 00:21:19.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:19.797 "listen_address": { 00:21:19.797 "trtype": "TCP", 00:21:19.797 "adrfam": "IPv4", 00:21:19.797 "traddr": "10.0.0.2", 00:21:19.797 "trsvcid": "4420" 00:21:19.797 }, 00:21:19.797 "peer_address": { 00:21:19.797 "trtype": "TCP", 00:21:19.797 "adrfam": "IPv4", 00:21:19.797 "traddr": "10.0.0.1", 00:21:19.797 "trsvcid": "54940" 00:21:19.797 }, 00:21:19.797 "auth": { 00:21:19.797 "state": "completed", 00:21:19.797 "digest": "sha512", 00:21:19.797 "dhgroup": "ffdhe8192" 00:21:19.797 } 00:21:19.797 } 00:21:19.797 ]' 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.797 13:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.797 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.798 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.798 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.798 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.798 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.054 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:21:20.054 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: --dhchap-ctrl-secret DHHC-1:02:YTc2Y2I2ZjRjMjk1OGFhNTU3MWM5ZDVjNTIwNGI2NWU2ZTgwNzE2YmE5NDhkZTk3p8mfSg==: 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.986 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.243 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.175 00:21:22.175 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.175 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.175 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.433 13:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.433 { 00:21:22.433 "cntlid": 141, 00:21:22.433 "qid": 0, 00:21:22.433 "state": "enabled", 00:21:22.433 "thread": "nvmf_tgt_poll_group_000", 00:21:22.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:22.433 "listen_address": { 00:21:22.433 "trtype": "TCP", 00:21:22.433 "adrfam": "IPv4", 00:21:22.433 "traddr": "10.0.0.2", 00:21:22.433 "trsvcid": "4420" 00:21:22.433 }, 00:21:22.433 "peer_address": { 00:21:22.433 "trtype": "TCP", 00:21:22.433 "adrfam": "IPv4", 00:21:22.433 "traddr": "10.0.0.1", 00:21:22.433 "trsvcid": "54980" 00:21:22.433 }, 00:21:22.433 "auth": { 00:21:22.433 "state": "completed", 00:21:22.433 "digest": "sha512", 00:21:22.433 "dhgroup": "ffdhe8192" 00:21:22.433 } 00:21:22.433 } 00:21:22.433 ]' 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.433 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.690 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:21:22.690 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:01:MmVhNzU2OTgyZTVjNjEzMjk1NGJkODIwZDAzOGQ3YTOf9Y9o: 00:21:23.621 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.621 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:23.622 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.622 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.622 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.622 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.622 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.622 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.878 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.879 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.879 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.879 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.806 00:21:24.806 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.806 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.806 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.061 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.061 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.061 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.061 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.061 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.061 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.061 { 00:21:25.061 "cntlid": 143, 00:21:25.061 "qid": 0, 00:21:25.061 "state": "enabled", 00:21:25.061 "thread": "nvmf_tgt_poll_group_000", 00:21:25.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:25.061 "listen_address": { 00:21:25.061 "trtype": "TCP", 00:21:25.061 "adrfam": 
"IPv4", 00:21:25.061 "traddr": "10.0.0.2", 00:21:25.061 "trsvcid": "4420" 00:21:25.061 }, 00:21:25.061 "peer_address": { 00:21:25.061 "trtype": "TCP", 00:21:25.061 "adrfam": "IPv4", 00:21:25.061 "traddr": "10.0.0.1", 00:21:25.062 "trsvcid": "55012" 00:21:25.062 }, 00:21:25.062 "auth": { 00:21:25.062 "state": "completed", 00:21:25.062 "digest": "sha512", 00:21:25.062 "dhgroup": "ffdhe8192" 00:21:25.062 } 00:21:25.062 } 00:21:25.062 ]' 00:21:25.062 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.318 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.318 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.318 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.318 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.318 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.318 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.318 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.575 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:25.575 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.509 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.765 13:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.765 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.695 00:21:27.695 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.695 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.695 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.952 { 00:21:27.952 "cntlid": 145, 00:21:27.952 "qid": 0, 00:21:27.952 "state": "enabled", 00:21:27.952 "thread": "nvmf_tgt_poll_group_000", 00:21:27.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:27.952 "listen_address": { 00:21:27.952 "trtype": "TCP", 00:21:27.952 "adrfam": "IPv4", 00:21:27.952 "traddr": "10.0.0.2", 00:21:27.952 "trsvcid": "4420" 00:21:27.952 }, 00:21:27.952 "peer_address": { 00:21:27.952 "trtype": "TCP", 00:21:27.952 "adrfam": "IPv4", 00:21:27.952 "traddr": "10.0.0.1", 00:21:27.952 "trsvcid": "44936" 00:21:27.952 }, 00:21:27.952 "auth": { 00:21:27.952 "state": 
"completed", 00:21:27.952 "digest": "sha512", 00:21:27.952 "dhgroup": "ffdhe8192" 00:21:27.952 } 00:21:27.952 } 00:21:27.952 ]' 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.952 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.209 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:21:28.209 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YWE2Y2RiM2QzZmQ3MDI3ODI3M2QxNGMzYzIzOTRiZjc5MjI5MGVlZjRmZTYyNTM3hsV10Q==: --dhchap-ctrl-secret 
DHHC-1:03:YTI2N2QxZDUyZTEzZDM4YjllZWUzMGU1OWRjZDg4NmZlZTQyYTg2YWExMjk4ZDQwYjI2NDFiZmFhODQwYTU3OEzdU/g=: 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:29.140 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:30.071 request: 00:21:30.071 { 00:21:30.071 "name": "nvme0", 00:21:30.071 "trtype": "tcp", 00:21:30.071 "traddr": "10.0.0.2", 00:21:30.071 "adrfam": "ipv4", 00:21:30.071 "trsvcid": "4420", 00:21:30.071 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:30.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:30.071 "prchk_reftag": false, 00:21:30.071 "prchk_guard": false, 00:21:30.071 "hdgst": false, 00:21:30.071 "ddgst": false, 00:21:30.071 "dhchap_key": "key2", 00:21:30.071 "allow_unrecognized_csi": false, 00:21:30.071 "method": "bdev_nvme_attach_controller", 00:21:30.071 "req_id": 1 00:21:30.071 } 00:21:30.071 Got JSON-RPC error response 00:21:30.071 response: 00:21:30.071 { 00:21:30.071 "code": -5, 00:21:30.071 "message": 
"Input/output error" 00:21:30.071 } 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:30.071 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:30.071 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.001 request: 00:21:31.001 { 00:21:31.001 "name": "nvme0", 00:21:31.001 "trtype": "tcp", 00:21:31.001 "traddr": "10.0.0.2", 00:21:31.001 "adrfam": "ipv4", 00:21:31.001 "trsvcid": "4420", 00:21:31.001 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:31.001 "prchk_reftag": false, 00:21:31.001 "prchk_guard": false, 00:21:31.001 "hdgst": 
false, 00:21:31.001 "ddgst": false, 00:21:31.001 "dhchap_key": "key1", 00:21:31.001 "dhchap_ctrlr_key": "ckey2", 00:21:31.001 "allow_unrecognized_csi": false, 00:21:31.001 "method": "bdev_nvme_attach_controller", 00:21:31.001 "req_id": 1 00:21:31.001 } 00:21:31.001 Got JSON-RPC error response 00:21:31.001 response: 00:21:31.001 { 00:21:31.001 "code": -5, 00:21:31.001 "message": "Input/output error" 00:21:31.001 } 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.001 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.566 request: 00:21:31.566 { 00:21:31.566 "name": "nvme0", 00:21:31.566 "trtype": 
"tcp", 00:21:31.566 "traddr": "10.0.0.2", 00:21:31.566 "adrfam": "ipv4", 00:21:31.566 "trsvcid": "4420", 00:21:31.566 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:31.566 "prchk_reftag": false, 00:21:31.566 "prchk_guard": false, 00:21:31.566 "hdgst": false, 00:21:31.566 "ddgst": false, 00:21:31.566 "dhchap_key": "key1", 00:21:31.566 "dhchap_ctrlr_key": "ckey1", 00:21:31.566 "allow_unrecognized_csi": false, 00:21:31.566 "method": "bdev_nvme_attach_controller", 00:21:31.566 "req_id": 1 00:21:31.566 } 00:21:31.566 Got JSON-RPC error response 00:21:31.566 response: 00:21:31.566 { 00:21:31.566 "code": -5, 00:21:31.566 "message": "Input/output error" 00:21:31.566 } 00:21:31.566 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:31.566 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.566 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.566 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2229804 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2229804 ']' 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2229804 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229804 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229804' 00:21:31.824 killing process with pid 2229804 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2229804 00:21:31.824 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2229804 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2252528 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2252528 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2252528 ']' 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.081 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2252528 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2252528 ']' 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.338 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.596 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.596 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:32.596 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:32.596 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.596 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.596 null0 00:21:32.596 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.596 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.596 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yxu 00:21:32.596 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.596 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.596 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.596 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.vv2 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vv2 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.djv 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.mOn ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mOn 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JEn 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.IHl ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHl 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FXa 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.852 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.223 nvme0n1 00:21:34.223 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.223 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.223 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.480 { 00:21:34.480 "cntlid": 1, 00:21:34.480 "qid": 0, 00:21:34.480 "state": "enabled", 00:21:34.480 "thread": "nvmf_tgt_poll_group_000", 00:21:34.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:34.480 "listen_address": { 00:21:34.480 "trtype": "TCP", 00:21:34.480 "adrfam": "IPv4", 00:21:34.480 "traddr": "10.0.0.2", 00:21:34.480 "trsvcid": "4420" 00:21:34.480 }, 00:21:34.480 "peer_address": { 00:21:34.480 "trtype": "TCP", 00:21:34.480 "adrfam": "IPv4", 00:21:34.480 "traddr": 
"10.0.0.1", 00:21:34.480 "trsvcid": "45008" 00:21:34.480 }, 00:21:34.480 "auth": { 00:21:34.480 "state": "completed", 00:21:34.480 "digest": "sha512", 00:21:34.480 "dhgroup": "ffdhe8192" 00:21:34.480 } 00:21:34.480 } 00:21:34.480 ]' 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.480 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.047 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:35.047 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:35.613 13:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.870 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:35.871 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:36.128 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:36.128 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:36.128 13:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:36.128 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:36.128 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.128 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:36.128 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.129 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.129 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.129 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.387 request: 00:21:36.387 { 00:21:36.387 "name": "nvme0", 00:21:36.387 "trtype": "tcp", 00:21:36.387 "traddr": "10.0.0.2", 00:21:36.387 "adrfam": "ipv4", 00:21:36.387 "trsvcid": "4420", 00:21:36.387 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:36.387 "prchk_reftag": false, 00:21:36.387 "prchk_guard": false, 00:21:36.387 "hdgst": false, 00:21:36.387 "ddgst": false, 00:21:36.387 "dhchap_key": "key3", 00:21:36.387 
"allow_unrecognized_csi": false, 00:21:36.387 "method": "bdev_nvme_attach_controller", 00:21:36.387 "req_id": 1 00:21:36.387 } 00:21:36.387 Got JSON-RPC error response 00:21:36.387 response: 00:21:36.387 { 00:21:36.387 "code": -5, 00:21:36.387 "message": "Input/output error" 00:21:36.387 } 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:36.387 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:36.644 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:36.645 13:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.645 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.903 request: 00:21:36.903 { 00:21:36.903 "name": "nvme0", 00:21:36.903 "trtype": "tcp", 00:21:36.903 "traddr": "10.0.0.2", 00:21:36.903 "adrfam": "ipv4", 00:21:36.903 "trsvcid": "4420", 00:21:36.903 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:36.903 "prchk_reftag": false, 00:21:36.903 "prchk_guard": false, 00:21:36.903 "hdgst": false, 00:21:36.903 "ddgst": false, 00:21:36.903 "dhchap_key": "key3", 00:21:36.903 "allow_unrecognized_csi": false, 00:21:36.903 "method": "bdev_nvme_attach_controller", 00:21:36.903 "req_id": 1 00:21:36.903 } 00:21:36.903 Got JSON-RPC error response 00:21:36.903 response: 00:21:36.903 { 00:21:36.903 "code": -5, 00:21:36.903 "message": "Input/output error" 00:21:36.903 } 00:21:36.903 
13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.903 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.159 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.723 request: 00:21:37.723 { 00:21:37.723 "name": "nvme0", 00:21:37.723 "trtype": "tcp", 00:21:37.723 "traddr": "10.0.0.2", 00:21:37.723 "adrfam": "ipv4", 00:21:37.723 "trsvcid": "4420", 00:21:37.723 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:37.723 "prchk_reftag": false, 00:21:37.723 "prchk_guard": false, 00:21:37.723 "hdgst": false, 00:21:37.723 "ddgst": false, 00:21:37.723 "dhchap_key": "key0", 00:21:37.723 "dhchap_ctrlr_key": "key1", 00:21:37.723 "allow_unrecognized_csi": false, 00:21:37.723 "method": "bdev_nvme_attach_controller", 00:21:37.723 "req_id": 1 00:21:37.723 } 00:21:37.723 Got JSON-RPC error response 00:21:37.723 response: 00:21:37.723 { 00:21:37.723 "code": -5, 00:21:37.723 "message": "Input/output error" 00:21:37.723 } 00:21:37.723 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:37.723 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.723 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.723 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.723 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:37.723 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:37.723 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:37.981 nvme0n1 00:21:37.981 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:37.981 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.981 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:38.238 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.238 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.238 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.495 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:38.495 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.495 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:38.495 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.495 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:38.495 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:38.495 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:39.865 nvme0n1 00:21:39.865 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:39.865 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:39.865 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.121 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.121 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:40.121 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.121 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.121 
13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.121 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:40.121 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.121 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:40.378 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.378 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:40.378 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: --dhchap-ctrl-secret DHHC-1:03:ODQ3MDY3M2Q1OGYwYjNlZWZkYjE5N2E3NGNiOTNkZDk3MWRlNzdlYmE5N2NmOGI4NjI2ZGRmODc0MDllMTU0OURYbXw=: 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.380 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:41.637 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:42.580 request: 00:21:42.580 { 00:21:42.580 "name": "nvme0", 00:21:42.580 "trtype": "tcp", 00:21:42.580 "traddr": "10.0.0.2", 00:21:42.580 "adrfam": "ipv4", 00:21:42.580 "trsvcid": "4420", 00:21:42.580 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:42.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:42.580 "prchk_reftag": false, 00:21:42.580 "prchk_guard": false, 00:21:42.580 "hdgst": false, 00:21:42.580 "ddgst": false, 00:21:42.580 "dhchap_key": "key1", 00:21:42.580 "allow_unrecognized_csi": false, 00:21:42.580 "method": "bdev_nvme_attach_controller", 00:21:42.580 "req_id": 1 00:21:42.580 } 00:21:42.580 Got JSON-RPC error response 00:21:42.580 response: 00:21:42.580 { 00:21:42.580 "code": -5, 00:21:42.580 "message": "Input/output error" 00:21:42.580 } 00:21:42.580 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:42.580 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.580 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.580 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.580 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.580 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.580 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:43.952 nvme0n1 00:21:43.952 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:43.952 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:43.952 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.209 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.209 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.209 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.466 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:44.466 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.466 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:44.466 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.466 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:44.466 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:44.466 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:44.723 nvme0n1 00:21:44.723 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:44.723 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:44.723 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.980 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.980 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.980 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: '' 2s 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: ]] 00:21:45.238 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjcxM2M5YjA3MWFmNTNjMGZkNWZkZTFmODFkZmI0ZGYDwin3: 00:21:45.494 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:45.494 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:45.494 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:47.387 
13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: 2s 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:47.388 13:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: ]] 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:M2Q5YmVjNjNhNGU5N2E4MTAzZTY5ZTE2ODMwMDhlZDBkM2I3MGEyOWU4MTZmNWJiSFi8BQ==: 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:47.388 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:49.284 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:49.284 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:49.284 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:49.284 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:49.542 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:50.914 nvme0n1 00:21:50.914 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:21:50.914 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.914 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.914 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.914 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:50.914 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.847 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:51.847 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:51.847 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.105 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.105 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:52.105 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.105 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.105 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.105 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:52.105 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:52.364 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:52.364 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:52.364 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.621 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:53.185 request: 00:21:53.185 { 00:21:53.185 "name": "nvme0", 00:21:53.185 "dhchap_key": "key1", 00:21:53.185 "dhchap_ctrlr_key": "key3", 00:21:53.185 "method": "bdev_nvme_set_keys", 00:21:53.185 "req_id": 1 00:21:53.185 } 00:21:53.185 Got JSON-RPC error response 00:21:53.185 response: 00:21:53.185 { 00:21:53.185 "code": -13, 00:21:53.185 "message": "Permission denied" 00:21:53.185 } 00:21:53.185 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:53.185 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.185 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.185 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.185 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:53.185 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:53.185 13:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.776 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:53.776 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:54.708 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:54.708 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:54.708 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.965 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.336 nvme0n1 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.336 13:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:56.336 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.267 request: 00:21:57.267 { 00:21:57.267 "name": "nvme0", 00:21:57.267 "dhchap_key": "key2", 00:21:57.267 "dhchap_ctrlr_key": "key0", 00:21:57.267 "method": "bdev_nvme_set_keys", 00:21:57.267 "req_id": 1 00:21:57.267 } 00:21:57.267 Got JSON-RPC error response 00:21:57.267 response: 00:21:57.267 { 00:21:57.267 "code": -13, 00:21:57.267 "message": "Permission denied" 00:21:57.267 } 00:21:57.267 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.267 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.267 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.267 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.267 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:57.267 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.267 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:57.523 13:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:57.524 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:58.455 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:58.455 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:58.455 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2229829 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2229829 ']' 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2229829 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229829 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229829' 00:21:58.712 killing process with pid 2229829 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2229829 00:21:58.712 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2229829 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.278 rmmod nvme_tcp 00:21:59.278 rmmod nvme_fabrics 00:21:59.278 rmmod nvme_keyring 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2252528 ']' 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2252528 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2252528 ']' 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2252528 
00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2252528 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2252528' 00:21:59.278 killing process with pid 2252528 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2252528 00:21:59.278 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2252528 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.537 13:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.537 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Yxu /tmp/spdk.key-sha256.djv /tmp/spdk.key-sha384.JEn /tmp/spdk.key-sha512.FXa /tmp/spdk.key-sha512.vv2 /tmp/spdk.key-sha384.mOn /tmp/spdk.key-sha256.IHl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:01.438 00:22:01.438 real 3m31.219s 00:22:01.438 user 8m14.752s 00:22:01.438 sys 0m29.166s 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.438 ************************************ 00:22:01.438 END TEST nvmf_auth_target 00:22:01.438 ************************************ 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:01.438 ************************************ 00:22:01.438 START TEST nvmf_bdevio_no_huge 00:22:01.438 ************************************ 00:22:01.438 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:01.697 * Looking for test storage... 00:22:01.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.697 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:01.697 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:01.697 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.697 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:01.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.698 --rc genhtml_branch_coverage=1 00:22:01.698 --rc genhtml_function_coverage=1 00:22:01.698 --rc genhtml_legend=1 00:22:01.698 --rc geninfo_all_blocks=1 00:22:01.698 --rc geninfo_unexecuted_blocks=1 00:22:01.698 00:22:01.698 ' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:01.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.698 --rc genhtml_branch_coverage=1 00:22:01.698 --rc genhtml_function_coverage=1 00:22:01.698 --rc genhtml_legend=1 00:22:01.698 --rc geninfo_all_blocks=1 00:22:01.698 --rc geninfo_unexecuted_blocks=1 00:22:01.698 00:22:01.698 ' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:01.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.698 --rc genhtml_branch_coverage=1 00:22:01.698 --rc genhtml_function_coverage=1 00:22:01.698 --rc genhtml_legend=1 00:22:01.698 --rc geninfo_all_blocks=1 00:22:01.698 --rc geninfo_unexecuted_blocks=1 00:22:01.698 00:22:01.698 ' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:01.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.698 --rc genhtml_branch_coverage=1 
00:22:01.698 --rc genhtml_function_coverage=1 00:22:01.698 --rc genhtml_legend=1 00:22:01.698 --rc geninfo_all_blocks=1 00:22:01.698 --rc geninfo_unexecuted_blocks=1 00:22:01.698 00:22:01.698 ' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.698 13:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.698 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.699 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.699 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.699 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.699 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:22:04.245 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:04.245 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:04.245 Found net devices under 0000:09:00.0: cvl_0_0 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.245 
13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:04.245 Found net devices under 0000:09:00.1: cvl_0_1 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:04.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:22:04.245 00:22:04.245 --- 10.0.0.2 ping statistics --- 00:22:04.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.245 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:22:04.245 00:22:04.245 --- 10.0.0.1 ping statistics --- 00:22:04.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.245 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2257783 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2257783 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2257783 ']' 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.245 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.246 [2024-12-05 13:53:35.438842] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:22:04.246 [2024-12-05 13:53:35.438938] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:04.246 [2024-12-05 13:53:35.521354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.246 [2024-12-05 13:53:35.582608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.246 [2024-12-05 13:53:35.582665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.246 [2024-12-05 13:53:35.582694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.246 [2024-12-05 13:53:35.582706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.246 [2024-12-05 13:53:35.582716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.246 [2024-12-05 13:53:35.583836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:04.246 [2024-12-05 13:53:35.583909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:04.246 [2024-12-05 13:53:35.583939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:04.246 [2024-12-05 13:53:35.583941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.246 [2024-12-05 13:53:35.742535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:04.246 13:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.246 Malloc0 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.246 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.503 [2024-12-05 13:53:35.780723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.503 13:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.503 { 00:22:04.503 "params": { 00:22:04.503 "name": "Nvme$subsystem", 00:22:04.503 "trtype": "$TEST_TRANSPORT", 00:22:04.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.503 "adrfam": "ipv4", 00:22:04.503 "trsvcid": "$NVMF_PORT", 00:22:04.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.503 "hdgst": ${hdgst:-false}, 00:22:04.503 "ddgst": ${ddgst:-false} 00:22:04.503 }, 00:22:04.503 "method": "bdev_nvme_attach_controller" 00:22:04.503 } 00:22:04.503 EOF 00:22:04.503 )") 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:04.503 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:04.503 "params": { 00:22:04.503 "name": "Nvme1", 00:22:04.503 "trtype": "tcp", 00:22:04.503 "traddr": "10.0.0.2", 00:22:04.503 "adrfam": "ipv4", 00:22:04.503 "trsvcid": "4420", 00:22:04.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.503 "hdgst": false, 00:22:04.503 "ddgst": false 00:22:04.503 }, 00:22:04.503 "method": "bdev_nvme_attach_controller" 00:22:04.503 }' 00:22:04.503 [2024-12-05 13:53:35.835208] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:22:04.503 [2024-12-05 13:53:35.835291] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2257853 ] 00:22:04.503 [2024-12-05 13:53:35.911274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:04.503 [2024-12-05 13:53:35.976004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.503 [2024-12-05 13:53:35.976054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.503 [2024-12-05 13:53:35.976058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.068 I/O targets: 00:22:05.068 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:05.068 00:22:05.068 00:22:05.068 CUnit - A unit testing framework for C - Version 2.1-3 00:22:05.068 http://cunit.sourceforge.net/ 00:22:05.068 00:22:05.068 00:22:05.068 Suite: bdevio tests on: Nvme1n1 00:22:05.068 Test: blockdev write read block ...passed 00:22:05.068 Test: blockdev write zeroes read block ...passed 00:22:05.068 Test: blockdev write zeroes read no split ...passed 00:22:05.068 Test: blockdev write zeroes 
read split ...passed 00:22:05.068 Test: blockdev write zeroes read split partial ...passed 00:22:05.068 Test: blockdev reset ...[2024-12-05 13:53:36.404370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:05.068 [2024-12-05 13:53:36.404487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198f6a0 (9): Bad file descriptor 00:22:05.068 [2024-12-05 13:53:36.424995] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:05.068 passed 00:22:05.068 Test: blockdev write read 8 blocks ...passed 00:22:05.068 Test: blockdev write read size > 128k ...passed 00:22:05.068 Test: blockdev write read invalid size ...passed 00:22:05.068 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:05.068 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:05.068 Test: blockdev write read max offset ...passed 00:22:05.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:05.324 Test: blockdev writev readv 8 blocks ...passed 00:22:05.324 Test: blockdev writev readv 30 x 1block ...passed 00:22:05.324 Test: blockdev writev readv block ...passed 00:22:05.324 Test: blockdev writev readv size > 128k ...passed 00:22:05.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:05.324 Test: blockdev comparev and writev ...[2024-12-05 13:53:36.678401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.324 [2024-12-05 13:53:36.678444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.324 [2024-12-05 13:53:36.678469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.324 [2024-12-05 
13:53:36.678487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.324 [2024-12-05 13:53:36.678805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.324 [2024-12-05 13:53:36.678830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.324 [2024-12-05 13:53:36.678852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.324 [2024-12-05 13:53:36.678868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.324 [2024-12-05 13:53:36.679240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.324 [2024-12-05 13:53:36.679264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.324 [2024-12-05 13:53:36.679286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.324 [2024-12-05 13:53:36.679302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.325 [2024-12-05 13:53:36.679673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.325 [2024-12-05 13:53:36.679697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.325 [2024-12-05 13:53:36.679719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.325 [2024-12-05 13:53:36.679735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.325 passed 00:22:05.325 Test: blockdev nvme passthru rw ...passed 00:22:05.325 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:53:36.762682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.325 [2024-12-05 13:53:36.762709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:05.325 [2024-12-05 13:53:36.762843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.325 [2024-12-05 13:53:36.762867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:05.325 [2024-12-05 13:53:36.763004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.325 [2024-12-05 13:53:36.763033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:05.325 [2024-12-05 13:53:36.763163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.325 [2024-12-05 13:53:36.763186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:05.325 passed 00:22:05.325 Test: blockdev nvme admin passthru ...passed 00:22:05.325 Test: blockdev copy ...passed 00:22:05.325 00:22:05.325 Run Summary: Type Total Ran Passed Failed Inactive 00:22:05.325 suites 1 1 n/a 0 0 00:22:05.325 tests 23 23 23 0 0 00:22:05.325 asserts 152 152 152 0 n/a 00:22:05.325 00:22:05.325 Elapsed time = 1.063 seconds 
00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.888 rmmod nvme_tcp 00:22:05.888 rmmod nvme_fabrics 00:22:05.888 rmmod nvme_keyring 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:05.888 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2257783 ']' 00:22:05.889 13:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2257783 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2257783 ']' 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2257783 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2257783 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2257783' 00:22:05.889 killing process with pid 2257783 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2257783 00:22:05.889 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2257783 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:06.148 13:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.148 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.685 00:22:08.685 real 0m6.778s 00:22:08.685 user 0m11.248s 00:22:08.685 sys 0m2.665s 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.685 ************************************ 00:22:08.685 END TEST nvmf_bdevio_no_huge 00:22:08.685 ************************************ 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:08.685 
************************************ 00:22:08.685 START TEST nvmf_tls 00:22:08.685 ************************************ 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:08.685 * Looking for test storage... 00:22:08.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.685 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.686 --rc genhtml_branch_coverage=1 00:22:08.686 --rc genhtml_function_coverage=1 00:22:08.686 --rc genhtml_legend=1 00:22:08.686 --rc geninfo_all_blocks=1 00:22:08.686 --rc geninfo_unexecuted_blocks=1 00:22:08.686 00:22:08.686 ' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.686 --rc genhtml_branch_coverage=1 00:22:08.686 --rc genhtml_function_coverage=1 00:22:08.686 --rc genhtml_legend=1 00:22:08.686 --rc geninfo_all_blocks=1 00:22:08.686 --rc geninfo_unexecuted_blocks=1 00:22:08.686 00:22:08.686 ' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.686 --rc genhtml_branch_coverage=1 00:22:08.686 --rc genhtml_function_coverage=1 00:22:08.686 --rc genhtml_legend=1 00:22:08.686 --rc geninfo_all_blocks=1 00:22:08.686 --rc geninfo_unexecuted_blocks=1 00:22:08.686 00:22:08.686 ' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.686 --rc genhtml_branch_coverage=1 00:22:08.686 --rc genhtml_function_coverage=1 00:22:08.686 --rc genhtml_legend=1 00:22:08.686 --rc geninfo_all_blocks=1 00:22:08.686 --rc geninfo_unexecuted_blocks=1 00:22:08.686 00:22:08.686 ' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.686 
13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:08.686 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:10.666 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.667 13:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:10.667 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:10.667 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.667 13:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:10.667 Found net devices under 0000:09:00.0: cvl_0_0 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:10.667 Found net devices under 0000:09:00.1: cvl_0_1 00:22:10.667 13:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.667 
13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:22:10.667 00:22:10.667 --- 10.0.0.2 ping statistics --- 00:22:10.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.667 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:22:10.667 00:22:10.667 --- 10.0.0.1 ping statistics --- 00:22:10.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.667 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.667 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2260015 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2260015 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2260015 ']' 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.926 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.926 [2024-12-05 13:53:42.262629] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:22:10.926 [2024-12-05 13:53:42.262724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.926 [2024-12-05 13:53:42.340559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.926 [2024-12-05 13:53:42.396832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.926 [2024-12-05 13:53:42.396887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:10.926 [2024-12-05 13:53:42.396916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.926 [2024-12-05 13:53:42.396928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.926 [2024-12-05 13:53:42.396938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.926 [2024-12-05 13:53:42.397587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:11.183 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:11.440 true 00:22:11.440 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:11.440 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:11.697 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:11.697 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:11.697 
13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:11.956 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:11.956 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:12.213 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:12.213 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:12.213 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:12.471 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:12.471 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:12.729 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:12.729 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:12.729 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:12.729 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:12.986 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:12.986 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:12.986 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
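The `jq -r .tls_version` / `jq -r .enable_ktls` steps above just pull single fields out of the JSON that `rpc.py sock_impl_get_options -i ssl` prints. A minimal Python sketch of the same extraction; the sample payload below is an assumption for illustration (only the two fields the test actually reads are shown, the real dump has more):

```python
import json

# Assumed sample in the shape sock_impl_get_options prints; tls_version and
# enable_ktls are the two fields the tls.sh checks above extract with jq.
sample = '{"tls_version": 13, "enable_ktls": false}'

opts = json.loads(sample)
tls_version = opts["tls_version"]   # equivalent of `jq -r .tls_version`
ktls = opts["enable_ktls"]          # equivalent of `jq -r .enable_ktls`

print(tls_version)
print(str(ktls).lower())  # jq -r prints booleans lowercase, e.g. "false"
```

The shell test then compares these strings against the value it just set (`[[ 13 != \1\3 ]]`, `[[ false != \f\a\l\s\e ]]`), failing if the RPC did not take effect.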
00:22:13.549 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:13.549 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:13.806 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:13.806 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:13.806 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:14.063 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:14.063 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:14.320 13:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:14.320 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Dp5cFpVpXP 00:22:14.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:14.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.haZ6ePOZ46 00:22:14.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:14.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:14.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Dp5cFpVpXP 00:22:14.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.haZ6ePOZ46 00:22:14.321 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:14.578 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:15.144 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Dp5cFpVpXP 00:22:15.144 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dp5cFpVpXP 00:22:15.144 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.402 [2024-12-05 13:53:46.690662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.402 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:15.661 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:15.919 [2024-12-05 13:53:47.228084] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:15.919 [2024-12-05 13:53:47.228351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.919 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:16.177 malloc0 00:22:16.177 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:16.434 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dp5cFpVpXP 00:22:16.693 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:16.950 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Dp5cFpVpXP 00:22:29.140 Initializing NVMe Controllers 00:22:29.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.140 Initialization complete. Launching workers. 
00:22:29.140 ======================================================== 00:22:29.140 Latency(us) 00:22:29.140 Device Information : IOPS MiB/s Average min max 00:22:29.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8607.83 33.62 7437.15 1010.21 8664.67 00:22:29.140 ======================================================== 00:22:29.140 Total : 8607.83 33.62 7437.15 1010.21 8664.67 00:22:29.140 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dp5cFpVpXP 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Dp5cFpVpXP 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2261976 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2261976 /var/tmp/bdevperf.sock 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2261976 ']' 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
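The `format_interchange_psk` helper used above pipes the raw hex key through an inline `python -` snippet to build an NVMe TLS PSK interchange string of the form `NVMeTLSkey-1:01:<base64>:`. A reimplementation sketch follows; it assumes, matching SPDK's helper, that the trailing four bytes inside the base64 payload are a little-endian CRC-32 of the key bytes:

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Build an NVMe TLS PSK interchange string: prefix, two-digit hash id,
    then base64 of the key bytes with a CRC-32 appended (assumed little-endian)."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, hmac, base64.b64encode(raw + crc).decode())

# Same inputs as the first format_interchange_psk call in the log above.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
print(psk)
```

The 32-character key plus 4 CRC bytes is 36 bytes, which is why the base64 payload in the log is exactly 48 characters with no padding.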
00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.140 [2024-12-05 13:53:58.598559] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:22:29.140 [2024-12-05 13:53:58.598647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261976 ] 00:22:29.140 [2024-12-05 13:53:58.664194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.140 [2024-12-05 13:53:58.718448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.140 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dp5cFpVpXP 00:22:29.140 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:29.140 [2024-12-05 13:53:59.406583] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.140 TLSTESTn1 00:22:29.140 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:29.140 Running I/O for 10 seconds... 00:22:30.510 3304.00 IOPS, 12.91 MiB/s [2024-12-05T12:54:02.969Z] 3364.00 IOPS, 13.14 MiB/s [2024-12-05T12:54:03.901Z] 3402.67 IOPS, 13.29 MiB/s [2024-12-05T12:54:04.835Z] 3408.50 IOPS, 13.31 MiB/s [2024-12-05T12:54:05.765Z] 3428.80 IOPS, 13.39 MiB/s [2024-12-05T12:54:06.696Z] 3436.67 IOPS, 13.42 MiB/s [2024-12-05T12:54:07.628Z] 3442.29 IOPS, 13.45 MiB/s [2024-12-05T12:54:08.998Z] 3445.12 IOPS, 13.46 MiB/s [2024-12-05T12:54:09.929Z] 3441.89 IOPS, 13.44 MiB/s [2024-12-05T12:54:09.929Z] 3447.20 IOPS, 13.47 MiB/s 00:22:38.403 Latency(us) 00:22:38.403 [2024-12-05T12:54:09.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.403 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:38.403 Verification LBA range: start 0x0 length 0x2000 00:22:38.403 TLSTESTn1 : 10.02 3453.87 13.49 0.00 0.00 37001.13 6456.51 33010.73 00:22:38.403 [2024-12-05T12:54:09.929Z] =================================================================================================================== 00:22:38.403 [2024-12-05T12:54:09.929Z] Total : 3453.87 13.49 0.00 0.00 37001.13 6456.51 33010.73 00:22:38.403 { 00:22:38.403 "results": [ 00:22:38.403 { 00:22:38.403 "job": "TLSTESTn1", 00:22:38.403 "core_mask": "0x4", 00:22:38.403 "workload": "verify", 00:22:38.403 "status": "finished", 00:22:38.403 "verify_range": { 00:22:38.403 "start": 0, 00:22:38.403 "length": 8192 00:22:38.403 }, 00:22:38.403 "queue_depth": 128, 00:22:38.403 "io_size": 4096, 00:22:38.403 "runtime": 10.01774, 00:22:38.403 "iops": 
3453.872829600289, 00:22:38.403 "mibps": 13.49169074062613, 00:22:38.403 "io_failed": 0, 00:22:38.403 "io_timeout": 0, 00:22:38.403 "avg_latency_us": 37001.12611774781, 00:22:38.403 "min_latency_us": 6456.50962962963, 00:22:38.403 "max_latency_us": 33010.72592592592 00:22:38.403 } 00:22:38.403 ], 00:22:38.403 "core_count": 1 00:22:38.403 } 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2261976 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2261976 ']' 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2261976 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261976 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261976' 00:22:38.403 killing process with pid 2261976 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2261976 00:22:38.403 Received shutdown signal, test time was about 10.000000 seconds 00:22:38.403 00:22:38.403 Latency(us) 00:22:38.403 [2024-12-05T12:54:09.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.403 [2024-12-05T12:54:09.929Z] 
=================================================================================================================== 00:22:38.403 [2024-12-05T12:54:09.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2261976 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.haZ6ePOZ46 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.haZ6ePOZ46 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.403 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.haZ6ePOZ46 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.haZ6ePOZ46 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2263863 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2263863 /var/tmp/bdevperf.sock 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2263863 ']' 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.404 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.661 [2024-12-05 13:54:09.934663] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
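The TLSTESTn1 summary above reports 3453.87 IOPS at the 4096-byte IO size (`-o 4096`) as 13.49 MiB/s; the conversion is just IOPS times IO size divided by 2^20, which is quick to check:

```python
# Check the IOPS -> MiB/s conversion from the TLSTESTn1 results JSON above:
# with 4096-byte IOs, throughput in MiB/s is iops * io_size / 2**20.
iops = 3453.87   # "iops" field from the results
io_size = 4096   # "io_size" field / the -o 4096 bdevperf argument
mibps = iops * io_size / 2**20
print(round(mibps, 2))
```

This matches the reported "mibps": 13.4916... to rounding, confirming the two columns are consistent.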
00:22:38.661 [2024-12-05 13:54:09.934762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263863 ] 00:22:38.661 [2024-12-05 13:54:10.003605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.661 [2024-12-05 13:54:10.066428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.661 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.661 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:38.662 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.haZ6ePOZ46 00:22:39.226 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:39.226 [2024-12-05 13:54:10.687376] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.226 [2024-12-05 13:54:10.697746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:39.226 [2024-12-05 13:54:10.698503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d42f0 (107): Transport endpoint is not connected 00:22:39.226 [2024-12-05 13:54:10.699492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d42f0 (9): Bad file descriptor 00:22:39.226 [2024-12-05 
13:54:10.700492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:39.226 [2024-12-05 13:54:10.700522] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:39.226 [2024-12-05 13:54:10.700537] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:39.226 [2024-12-05 13:54:10.700552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:39.226 request: 00:22:39.226 { 00:22:39.226 "name": "TLSTEST", 00:22:39.226 "trtype": "tcp", 00:22:39.226 "traddr": "10.0.0.2", 00:22:39.226 "adrfam": "ipv4", 00:22:39.226 "trsvcid": "4420", 00:22:39.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.226 "prchk_reftag": false, 00:22:39.226 "prchk_guard": false, 00:22:39.226 "hdgst": false, 00:22:39.226 "ddgst": false, 00:22:39.226 "psk": "key0", 00:22:39.226 "allow_unrecognized_csi": false, 00:22:39.226 "method": "bdev_nvme_attach_controller", 00:22:39.226 "req_id": 1 00:22:39.226 } 00:22:39.226 Got JSON-RPC error response 00:22:39.226 response: 00:22:39.226 { 00:22:39.226 "code": -5, 00:22:39.226 "message": "Input/output error" 00:22:39.226 } 00:22:39.226 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2263863 00:22:39.226 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2263863 ']' 00:22:39.226 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2263863 00:22:39.226 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:39.226 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.226 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263863 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263863' 00:22:39.483 killing process with pid 2263863 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2263863 00:22:39.483 Received shutdown signal, test time was about 10.000000 seconds 00:22:39.483 00:22:39.483 Latency(us) 00:22:39.483 [2024-12-05T12:54:11.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.483 [2024-12-05T12:54:11.009Z] =================================================================================================================== 00:22:39.483 [2024-12-05T12:54:11.009Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2263863 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Dp5cFpVpXP 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
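The failed `bdev_nvme_attach_controller` above (wrong PSK for the host) surfaces as a JSON-RPC error object with `"code": -5` — a negated errno value, where 5 is EIO, matching the "Input/output error" message. A minimal sketch of pulling the code and message out of such a response; the response text is copied from the log, while the client-side handling is an assumption:

```python
import json

# Error response body as printed in the log above for the mismatched-PSK attach.
response_text = '''
{
  "code": -5,
  "message": "Input/output error"
}
'''

error = json.loads(response_text)
# SPDK RPC errors here carry negated errno values: -5 is -EIO.
failed = error["code"] < 0
print(error["code"], error["message"])
```

The `NOT run_bdevperf ...` wrapper in the script expects exactly this: the attach returns nonzero, `-- target/tls.sh@38 -- # return 1` propagates it, and the test passes because the failure was the intended outcome.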
00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Dp5cFpVpXP 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Dp5cFpVpXP 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Dp5cFpVpXP 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2264000 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2264000 
/var/tmp/bdevperf.sock 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2264000 ']' 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.483 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.483 [2024-12-05 13:54:10.995954] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:22:39.483 [2024-12-05 13:54:10.996036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264000 ] 00:22:39.741 [2024-12-05 13:54:11.070243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.741 [2024-12-05 13:54:11.129648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.741 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.741 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:39.741 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dp5cFpVpXP 00:22:39.998 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:40.255 [2024-12-05 13:54:11.741812] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.255 [2024-12-05 13:54:11.750295] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:40.255 [2024-12-05 13:54:11.750324] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:40.255 [2024-12-05 13:54:11.750376] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:40.255 [2024-12-05 13:54:11.750913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173d2f0 (107): Transport endpoint is not connected 00:22:40.255 [2024-12-05 13:54:11.751899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173d2f0 (9): Bad file descriptor 00:22:40.255 [2024-12-05 13:54:11.752898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:40.255 [2024-12-05 13:54:11.752922] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:40.255 [2024-12-05 13:54:11.752950] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:40.255 [2024-12-05 13:54:11.752965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:40.255 request: 00:22:40.255 { 00:22:40.255 "name": "TLSTEST", 00:22:40.255 "trtype": "tcp", 00:22:40.255 "traddr": "10.0.0.2", 00:22:40.255 "adrfam": "ipv4", 00:22:40.255 "trsvcid": "4420", 00:22:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.255 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.255 "prchk_reftag": false, 00:22:40.255 "prchk_guard": false, 00:22:40.255 "hdgst": false, 00:22:40.255 "ddgst": false, 00:22:40.255 "psk": "key0", 00:22:40.255 "allow_unrecognized_csi": false, 00:22:40.255 "method": "bdev_nvme_attach_controller", 00:22:40.255 "req_id": 1 00:22:40.255 } 00:22:40.255 Got JSON-RPC error response 00:22:40.255 response: 00:22:40.255 { 00:22:40.255 "code": -5, 00:22:40.255 "message": "Input/output error" 00:22:40.255 } 00:22:40.255 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2264000 00:22:40.255 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2264000 ']' 00:22:40.255 13:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2264000 00:22:40.255 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:40.255 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.255 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264000 00:22:40.512 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:40.512 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:40.512 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264000' 00:22:40.512 killing process with pid 2264000 00:22:40.512 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2264000 00:22:40.512 Received shutdown signal, test time was about 10.000000 seconds 00:22:40.512 00:22:40.512 Latency(us) 00:22:40.512 [2024-12-05T12:54:12.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.512 [2024-12-05T12:54:12.038Z] =================================================================================================================== 00:22:40.512 [2024-12-05T12:54:12.038Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:40.512 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2264000 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.512 13:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dp5cFpVpXP 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dp5cFpVpXP 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dp5cFpVpXP 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Dp5cFpVpXP 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:40.512 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2264137 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2264137 /var/tmp/bdevperf.sock 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2264137 ']' 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.513 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.770 [2024-12-05 13:54:12.063537] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:22:40.770 [2024-12-05 13:54:12.063625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264137 ] 00:22:40.770 [2024-12-05 13:54:12.132941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.770 [2024-12-05 13:54:12.191862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.027 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.027 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:41.027 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dp5cFpVpXP 00:22:41.284 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.542 [2024-12-05 13:54:12.826199] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.542 [2024-12-05 13:54:12.833073] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:41.542 [2024-12-05 13:54:12.833100] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:41.542 [2024-12-05 13:54:12.833163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:41.542 [2024-12-05 13:54:12.833201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7462f0 (107): Transport endpoint is not connected 00:22:41.542 [2024-12-05 13:54:12.834174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7462f0 (9): Bad file descriptor 00:22:41.542 [2024-12-05 13:54:12.835173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:41.542 [2024-12-05 13:54:12.835193] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:41.542 [2024-12-05 13:54:12.835221] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:41.542 [2024-12-05 13:54:12.835235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:41.542 request: 00:22:41.542 { 00:22:41.542 "name": "TLSTEST", 00:22:41.542 "trtype": "tcp", 00:22:41.542 "traddr": "10.0.0.2", 00:22:41.542 "adrfam": "ipv4", 00:22:41.542 "trsvcid": "4420", 00:22:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:41.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.542 "prchk_reftag": false, 00:22:41.542 "prchk_guard": false, 00:22:41.542 "hdgst": false, 00:22:41.542 "ddgst": false, 00:22:41.542 "psk": "key0", 00:22:41.542 "allow_unrecognized_csi": false, 00:22:41.542 "method": "bdev_nvme_attach_controller", 00:22:41.542 "req_id": 1 00:22:41.542 } 00:22:41.542 Got JSON-RPC error response 00:22:41.542 response: 00:22:41.542 { 00:22:41.542 "code": -5, 00:22:41.542 "message": "Input/output error" 00:22:41.542 } 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2264137 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2264137 ']' 00:22:41.542 13:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2264137 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264137 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264137' 00:22:41.542 killing process with pid 2264137 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2264137 00:22:41.542 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.542 00:22:41.542 Latency(us) 00:22:41.542 [2024-12-05T12:54:13.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.542 [2024-12-05T12:54:13.068Z] =================================================================================================================== 00:22:41.542 [2024-12-05T12:54:13.068Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:41.542 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2264137 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.801 13:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2264282 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2264282 /var/tmp/bdevperf.sock 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2264282 ']' 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.801 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.801 [2024-12-05 13:54:13.149845] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:22:41.801 [2024-12-05 13:54:13.149923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264282 ] 00:22:41.801 [2024-12-05 13:54:13.217895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.801 [2024-12-05 13:54:13.278154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.059 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.059 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:42.059 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:42.316 [2024-12-05 13:54:13.631333] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:42.316 [2024-12-05 13:54:13.631378] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:42.316 request: 00:22:42.316 { 00:22:42.316 "name": "key0", 00:22:42.316 "path": "", 00:22:42.316 "method": "keyring_file_add_key", 00:22:42.316 "req_id": 1 00:22:42.316 } 00:22:42.316 Got JSON-RPC error response 00:22:42.316 response: 00:22:42.316 { 00:22:42.316 "code": -1, 00:22:42.316 "message": "Operation not permitted" 00:22:42.316 } 00:22:42.316 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:42.573 [2024-12-05 13:54:13.892167] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:42.573 [2024-12-05 13:54:13.892231] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:42.573 request: 00:22:42.573 { 00:22:42.573 "name": "TLSTEST", 00:22:42.573 "trtype": "tcp", 00:22:42.573 "traddr": "10.0.0.2", 00:22:42.573 "adrfam": "ipv4", 00:22:42.573 "trsvcid": "4420", 00:22:42.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.573 "prchk_reftag": false, 00:22:42.573 "prchk_guard": false, 00:22:42.573 "hdgst": false, 00:22:42.573 "ddgst": false, 00:22:42.573 "psk": "key0", 00:22:42.573 "allow_unrecognized_csi": false, 00:22:42.573 "method": "bdev_nvme_attach_controller", 00:22:42.573 "req_id": 1 00:22:42.573 } 00:22:42.573 Got JSON-RPC error response 00:22:42.573 response: 00:22:42.573 { 00:22:42.573 "code": -126, 00:22:42.573 "message": "Required key not available" 00:22:42.573 } 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2264282 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2264282 ']' 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2264282 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264282 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264282' 00:22:42.573 killing process with pid 2264282 
00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2264282 00:22:42.573 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.573 00:22:42.573 Latency(us) 00:22:42.573 [2024-12-05T12:54:14.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.573 [2024-12-05T12:54:14.099Z] =================================================================================================================== 00:22:42.573 [2024-12-05T12:54:14.099Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:42.573 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2264282 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2260015 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2260015 ']' 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2260015 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2260015 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2260015' 00:22:42.831 killing process with pid 2260015 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2260015 00:22:42.831 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2260015 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.9mE94VFsKi 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:43.128 13:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.9mE94VFsKi 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2264450 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2264450 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2264450 ']' 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.128 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.128 [2024-12-05 13:54:14.485153] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:22:43.128 [2024-12-05 13:54:14.485244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.128 [2024-12-05 13:54:14.557608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.128 [2024-12-05 13:54:14.615517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.128 [2024-12-05 13:54:14.615572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.128 [2024-12-05 13:54:14.615588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.128 [2024-12-05 13:54:14.615600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.128 [2024-12-05 13:54:14.615610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.128 [2024-12-05 13:54:14.616235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.456 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.9mE94VFsKi 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9mE94VFsKi 00:22:43.457 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.713 [2024-12-05 13:54:15.012791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.713 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:43.970 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.228 [2024-12-05 13:54:15.546252] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.228 [2024-12-05 13:54:15.546512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:44.228 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.485 malloc0 00:22:44.485 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.742 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:22:44.999 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9mE94VFsKi 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9mE94VFsKi 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2264727 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.256 13:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2264727 /var/tmp/bdevperf.sock 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2264727 ']' 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.256 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.256 [2024-12-05 13:54:16.685801] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:22:45.256 [2024-12-05 13:54:16.685891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264727 ] 00:22:45.256 [2024-12-05 13:54:16.753995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.513 [2024-12-05 13:54:16.810243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.513 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.513 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:45.513 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:22:45.770 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.026 [2024-12-05 13:54:17.436927] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.026 TLSTESTn1 00:22:46.026 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:46.284 Running I/O for 10 seconds... 
00:22:48.153 3355.00 IOPS, 13.11 MiB/s [2024-12-05T12:54:21.051Z] 3381.50 IOPS, 13.21 MiB/s [2024-12-05T12:54:21.983Z] 3415.67 IOPS, 13.34 MiB/s [2024-12-05T12:54:22.916Z] 3432.25 IOPS, 13.41 MiB/s [2024-12-05T12:54:23.848Z] 3432.00 IOPS, 13.41 MiB/s [2024-12-05T12:54:24.809Z] 3423.17 IOPS, 13.37 MiB/s [2024-12-05T12:54:25.738Z] 3417.29 IOPS, 13.35 MiB/s [2024-12-05T12:54:26.669Z] 3422.38 IOPS, 13.37 MiB/s [2024-12-05T12:54:28.039Z] 3437.89 IOPS, 13.43 MiB/s [2024-12-05T12:54:28.039Z] 3440.40 IOPS, 13.44 MiB/s 00:22:56.513 Latency(us) 00:22:56.513 [2024-12-05T12:54:28.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.513 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:56.513 Verification LBA range: start 0x0 length 0x2000 00:22:56.513 TLSTESTn1 : 10.02 3445.29 13.46 0.00 0.00 37084.37 6505.05 46020.84 00:22:56.513 [2024-12-05T12:54:28.039Z] =================================================================================================================== 00:22:56.513 [2024-12-05T12:54:28.040Z] Total : 3445.29 13.46 0.00 0.00 37084.37 6505.05 46020.84 00:22:56.514 { 00:22:56.514 "results": [ 00:22:56.514 { 00:22:56.514 "job": "TLSTESTn1", 00:22:56.514 "core_mask": "0x4", 00:22:56.514 "workload": "verify", 00:22:56.514 "status": "finished", 00:22:56.514 "verify_range": { 00:22:56.514 "start": 0, 00:22:56.514 "length": 8192 00:22:56.514 }, 00:22:56.514 "queue_depth": 128, 00:22:56.514 "io_size": 4096, 00:22:56.514 "runtime": 10.02238, 00:22:56.514 "iops": 3445.289442228293, 00:22:56.514 "mibps": 13.45816188370427, 00:22:56.514 "io_failed": 0, 00:22:56.514 "io_timeout": 0, 00:22:56.514 "avg_latency_us": 37084.37412277032, 00:22:56.514 "min_latency_us": 6505.054814814815, 00:22:56.514 "max_latency_us": 46020.83555555555 00:22:56.514 } 00:22:56.514 ], 00:22:56.514 "core_count": 1 00:22:56.514 } 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2264727 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2264727 ']' 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2264727 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264727 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264727' 00:22:56.514 killing process with pid 2264727 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2264727 00:22:56.514 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.514 00:22:56.514 Latency(us) 00:22:56.514 [2024-12-05T12:54:28.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.514 [2024-12-05T12:54:28.040Z] =================================================================================================================== 00:22:56.514 [2024-12-05T12:54:28.040Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2264727 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.9mE94VFsKi 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9mE94VFsKi 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9mE94VFsKi 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9mE94VFsKi 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9mE94VFsKi 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2266043 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.514 
13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2266043 /var/tmp/bdevperf.sock 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2266043 ']' 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.514 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.514 [2024-12-05 13:54:27.988688] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:22:56.514 [2024-12-05 13:54:27.988802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266043 ] 00:22:56.772 [2024-12-05 13:54:28.059271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.772 [2024-12-05 13:54:28.116233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.772 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.772 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.772 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:22:57.029 [2024-12-05 13:54:28.471768] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9mE94VFsKi': 0100666 00:22:57.029 [2024-12-05 13:54:28.471805] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:57.029 request: 00:22:57.029 { 00:22:57.029 "name": "key0", 00:22:57.029 "path": "/tmp/tmp.9mE94VFsKi", 00:22:57.029 "method": "keyring_file_add_key", 00:22:57.029 "req_id": 1 00:22:57.029 } 00:22:57.029 Got JSON-RPC error response 00:22:57.029 response: 00:22:57.029 { 00:22:57.029 "code": -1, 00:22:57.029 "message": "Operation not permitted" 00:22:57.029 } 00:22:57.029 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.287 [2024-12-05 13:54:28.736583] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.287 [2024-12-05 13:54:28.736650] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:57.287 request: 00:22:57.287 { 00:22:57.287 "name": "TLSTEST", 00:22:57.287 "trtype": "tcp", 00:22:57.287 "traddr": "10.0.0.2", 00:22:57.287 "adrfam": "ipv4", 00:22:57.287 "trsvcid": "4420", 00:22:57.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.287 "prchk_reftag": false, 00:22:57.287 "prchk_guard": false, 00:22:57.287 "hdgst": false, 00:22:57.287 "ddgst": false, 00:22:57.287 "psk": "key0", 00:22:57.287 "allow_unrecognized_csi": false, 00:22:57.287 "method": "bdev_nvme_attach_controller", 00:22:57.287 "req_id": 1 00:22:57.287 } 00:22:57.287 Got JSON-RPC error response 00:22:57.287 response: 00:22:57.287 { 00:22:57.287 "code": -126, 00:22:57.287 "message": "Required key not available" 00:22:57.287 } 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2266043 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2266043 ']' 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2266043 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266043 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2266043' 00:22:57.287 killing process with pid 2266043 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2266043 00:22:57.287 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.287 00:22:57.287 Latency(us) 00:22:57.287 [2024-12-05T12:54:28.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.287 [2024-12-05T12:54:28.813Z] =================================================================================================================== 00:22:57.287 [2024-12-05T12:54:28.813Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.287 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2266043 00:22:57.544 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:57.544 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2264450 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2264450 ']' 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2264450 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264450 00:22:57.545 
13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264450' 00:22:57.545 killing process with pid 2264450 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2264450 00:22:57.545 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2264450 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2266310 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2266310 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2266310 ']' 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:57.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.803 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.061 [2024-12-05 13:54:29.338848] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:22:58.061 [2024-12-05 13:54:29.338929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.061 [2024-12-05 13:54:29.411864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.061 [2024-12-05 13:54:29.469278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.061 [2024-12-05 13:54:29.469332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.061 [2024-12-05 13:54:29.469361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.061 [2024-12-05 13:54:29.469373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.061 [2024-12-05 13:54:29.469383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.061 [2024-12-05 13:54:29.469999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.9mE94VFsKi 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9mE94VFsKi 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.9mE94VFsKi 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9mE94VFsKi 00:22:58.319 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:58.577 [2024-12-05 13:54:29.927213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.577 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:58.835 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:59.093 [2024-12-05 13:54:30.544909] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.093 [2024-12-05 13:54:30.545145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.093 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:59.351 malloc0 00:22:59.351 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:59.630 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:22:59.887 [2024-12-05 13:54:31.352835] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9mE94VFsKi': 0100666 00:22:59.887 [2024-12-05 13:54:31.352873] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:59.887 request: 00:22:59.887 { 00:22:59.887 "name": "key0", 00:22:59.887 "path": "/tmp/tmp.9mE94VFsKi", 00:22:59.887 "method": "keyring_file_add_key", 00:22:59.887 "req_id": 1 
00:22:59.887 } 00:22:59.887 Got JSON-RPC error response 00:22:59.887 response: 00:22:59.887 { 00:22:59.887 "code": -1, 00:22:59.887 "message": "Operation not permitted" 00:22:59.887 } 00:22:59.887 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.144 [2024-12-05 13:54:31.637610] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:00.144 [2024-12-05 13:54:31.637667] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:00.144 request: 00:23:00.144 { 00:23:00.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.144 "host": "nqn.2016-06.io.spdk:host1", 00:23:00.144 "psk": "key0", 00:23:00.144 "method": "nvmf_subsystem_add_host", 00:23:00.144 "req_id": 1 00:23:00.144 } 00:23:00.144 Got JSON-RPC error response 00:23:00.144 response: 00:23:00.144 { 00:23:00.144 "code": -32603, 00:23:00.144 "message": "Internal error" 00:23:00.144 } 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2266310 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2266310 ']' 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2266310 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:00.144 13:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.144 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266310 00:23:00.401 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:00.402 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:00.402 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266310' 00:23:00.402 killing process with pid 2266310 00:23:00.402 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2266310 00:23:00.402 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2266310 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.9mE94VFsKi 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2266611 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2266611 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2266611 ']' 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.660 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.660 [2024-12-05 13:54:31.993268] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:00.660 [2024-12-05 13:54:31.993368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.660 [2024-12-05 13:54:32.062003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.660 [2024-12-05 13:54:32.110168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.660 [2024-12-05 13:54:32.110228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.660 [2024-12-05 13:54:32.110255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.660 [2024-12-05 13:54:32.110265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.660 [2024-12-05 13:54:32.110275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:00.660 [2024-12-05 13:54:32.110868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.9mE94VFsKi 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9mE94VFsKi 00:23:00.918 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:01.176 [2024-12-05 13:54:32.494040] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.176 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:01.434 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:01.692 [2024-12-05 13:54:33.031521] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.692 [2024-12-05 13:54:33.031761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:01.692 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:01.949 malloc0 00:23:01.949 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:02.207 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:23:02.464 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2266898 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2266898 /var/tmp/bdevperf.sock 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2266898 ']' 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:02.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.722 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.723 [2024-12-05 13:54:34.153484] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:02.723 [2024-12-05 13:54:34.153560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266898 ] 00:23:02.723 [2024-12-05 13:54:34.219954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.981 [2024-12-05 13:54:34.278541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.981 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.981 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.981 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:23:03.239 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:03.496 [2024-12-05 13:54:34.922139] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.496 TLSTESTn1 00:23:03.496 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:04.059 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:04.059 "subsystems": [ 00:23:04.059 { 00:23:04.059 "subsystem": "keyring", 00:23:04.059 "config": [ 00:23:04.059 { 00:23:04.059 "method": "keyring_file_add_key", 00:23:04.059 "params": { 00:23:04.059 "name": "key0", 00:23:04.059 "path": "/tmp/tmp.9mE94VFsKi" 00:23:04.059 } 00:23:04.059 } 00:23:04.059 ] 00:23:04.059 }, 00:23:04.059 { 00:23:04.059 "subsystem": "iobuf", 00:23:04.059 "config": [ 00:23:04.059 { 00:23:04.059 "method": "iobuf_set_options", 00:23:04.059 "params": { 00:23:04.059 "small_pool_count": 8192, 00:23:04.059 "large_pool_count": 1024, 00:23:04.059 "small_bufsize": 8192, 00:23:04.059 "large_bufsize": 135168, 00:23:04.059 "enable_numa": false 00:23:04.059 } 00:23:04.059 } 00:23:04.059 ] 00:23:04.059 }, 00:23:04.059 { 00:23:04.059 "subsystem": "sock", 00:23:04.059 "config": [ 00:23:04.059 { 00:23:04.059 "method": "sock_set_default_impl", 00:23:04.059 "params": { 00:23:04.059 "impl_name": "posix" 00:23:04.059 } 00:23:04.059 }, 00:23:04.059 { 00:23:04.059 "method": "sock_impl_set_options", 00:23:04.059 "params": { 00:23:04.059 "impl_name": "ssl", 00:23:04.059 "recv_buf_size": 4096, 00:23:04.059 "send_buf_size": 4096, 00:23:04.059 "enable_recv_pipe": true, 00:23:04.059 "enable_quickack": false, 00:23:04.059 "enable_placement_id": 0, 00:23:04.059 "enable_zerocopy_send_server": true, 00:23:04.059 "enable_zerocopy_send_client": false, 00:23:04.059 "zerocopy_threshold": 0, 00:23:04.059 "tls_version": 0, 00:23:04.059 "enable_ktls": false 00:23:04.059 } 00:23:04.059 }, 00:23:04.059 { 00:23:04.059 "method": "sock_impl_set_options", 00:23:04.059 "params": { 00:23:04.059 "impl_name": "posix", 00:23:04.059 "recv_buf_size": 2097152, 00:23:04.059 "send_buf_size": 2097152, 00:23:04.059 "enable_recv_pipe": true, 00:23:04.059 "enable_quickack": false, 00:23:04.059 "enable_placement_id": 0, 
00:23:04.059 "enable_zerocopy_send_server": true, 00:23:04.059 "enable_zerocopy_send_client": false, 00:23:04.059 "zerocopy_threshold": 0, 00:23:04.059 "tls_version": 0, 00:23:04.059 "enable_ktls": false 00:23:04.059 } 00:23:04.059 } 00:23:04.059 ] 00:23:04.059 }, 00:23:04.059 { 00:23:04.059 "subsystem": "vmd", 00:23:04.059 "config": [] 00:23:04.059 }, 00:23:04.059 { 00:23:04.059 "subsystem": "accel", 00:23:04.059 "config": [ 00:23:04.059 { 00:23:04.059 "method": "accel_set_options", 00:23:04.059 "params": { 00:23:04.060 "small_cache_size": 128, 00:23:04.060 "large_cache_size": 16, 00:23:04.060 "task_count": 2048, 00:23:04.060 "sequence_count": 2048, 00:23:04.060 "buf_count": 2048 00:23:04.060 } 00:23:04.060 } 00:23:04.060 ] 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "subsystem": "bdev", 00:23:04.060 "config": [ 00:23:04.060 { 00:23:04.060 "method": "bdev_set_options", 00:23:04.060 "params": { 00:23:04.060 "bdev_io_pool_size": 65535, 00:23:04.060 "bdev_io_cache_size": 256, 00:23:04.060 "bdev_auto_examine": true, 00:23:04.060 "iobuf_small_cache_size": 128, 00:23:04.060 "iobuf_large_cache_size": 16 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "bdev_raid_set_options", 00:23:04.060 "params": { 00:23:04.060 "process_window_size_kb": 1024, 00:23:04.060 "process_max_bandwidth_mb_sec": 0 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "bdev_iscsi_set_options", 00:23:04.060 "params": { 00:23:04.060 "timeout_sec": 30 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "bdev_nvme_set_options", 00:23:04.060 "params": { 00:23:04.060 "action_on_timeout": "none", 00:23:04.060 "timeout_us": 0, 00:23:04.060 "timeout_admin_us": 0, 00:23:04.060 "keep_alive_timeout_ms": 10000, 00:23:04.060 "arbitration_burst": 0, 00:23:04.060 "low_priority_weight": 0, 00:23:04.060 "medium_priority_weight": 0, 00:23:04.060 "high_priority_weight": 0, 00:23:04.060 "nvme_adminq_poll_period_us": 10000, 00:23:04.060 "nvme_ioq_poll_period_us": 0, 
00:23:04.060 "io_queue_requests": 0, 00:23:04.060 "delay_cmd_submit": true, 00:23:04.060 "transport_retry_count": 4, 00:23:04.060 "bdev_retry_count": 3, 00:23:04.060 "transport_ack_timeout": 0, 00:23:04.060 "ctrlr_loss_timeout_sec": 0, 00:23:04.060 "reconnect_delay_sec": 0, 00:23:04.060 "fast_io_fail_timeout_sec": 0, 00:23:04.060 "disable_auto_failback": false, 00:23:04.060 "generate_uuids": false, 00:23:04.060 "transport_tos": 0, 00:23:04.060 "nvme_error_stat": false, 00:23:04.060 "rdma_srq_size": 0, 00:23:04.060 "io_path_stat": false, 00:23:04.060 "allow_accel_sequence": false, 00:23:04.060 "rdma_max_cq_size": 0, 00:23:04.060 "rdma_cm_event_timeout_ms": 0, 00:23:04.060 "dhchap_digests": [ 00:23:04.060 "sha256", 00:23:04.060 "sha384", 00:23:04.060 "sha512" 00:23:04.060 ], 00:23:04.060 "dhchap_dhgroups": [ 00:23:04.060 "null", 00:23:04.060 "ffdhe2048", 00:23:04.060 "ffdhe3072", 00:23:04.060 "ffdhe4096", 00:23:04.060 "ffdhe6144", 00:23:04.060 "ffdhe8192" 00:23:04.060 ] 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "bdev_nvme_set_hotplug", 00:23:04.060 "params": { 00:23:04.060 "period_us": 100000, 00:23:04.060 "enable": false 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "bdev_malloc_create", 00:23:04.060 "params": { 00:23:04.060 "name": "malloc0", 00:23:04.060 "num_blocks": 8192, 00:23:04.060 "block_size": 4096, 00:23:04.060 "physical_block_size": 4096, 00:23:04.060 "uuid": "44fef4d3-3b7d-43b4-9931-0d91677b7ef8", 00:23:04.060 "optimal_io_boundary": 0, 00:23:04.060 "md_size": 0, 00:23:04.060 "dif_type": 0, 00:23:04.060 "dif_is_head_of_md": false, 00:23:04.060 "dif_pi_format": 0 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "bdev_wait_for_examine" 00:23:04.060 } 00:23:04.060 ] 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "subsystem": "nbd", 00:23:04.060 "config": [] 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "subsystem": "scheduler", 00:23:04.060 "config": [ 00:23:04.060 { 00:23:04.060 "method": 
"framework_set_scheduler", 00:23:04.060 "params": { 00:23:04.060 "name": "static" 00:23:04.060 } 00:23:04.060 } 00:23:04.060 ] 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "subsystem": "nvmf", 00:23:04.060 "config": [ 00:23:04.060 { 00:23:04.060 "method": "nvmf_set_config", 00:23:04.060 "params": { 00:23:04.060 "discovery_filter": "match_any", 00:23:04.060 "admin_cmd_passthru": { 00:23:04.060 "identify_ctrlr": false 00:23:04.060 }, 00:23:04.060 "dhchap_digests": [ 00:23:04.060 "sha256", 00:23:04.060 "sha384", 00:23:04.060 "sha512" 00:23:04.060 ], 00:23:04.060 "dhchap_dhgroups": [ 00:23:04.060 "null", 00:23:04.060 "ffdhe2048", 00:23:04.060 "ffdhe3072", 00:23:04.060 "ffdhe4096", 00:23:04.060 "ffdhe6144", 00:23:04.060 "ffdhe8192" 00:23:04.060 ] 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "nvmf_set_max_subsystems", 00:23:04.060 "params": { 00:23:04.060 "max_subsystems": 1024 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "nvmf_set_crdt", 00:23:04.060 "params": { 00:23:04.060 "crdt1": 0, 00:23:04.060 "crdt2": 0, 00:23:04.060 "crdt3": 0 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "nvmf_create_transport", 00:23:04.060 "params": { 00:23:04.060 "trtype": "TCP", 00:23:04.060 "max_queue_depth": 128, 00:23:04.060 "max_io_qpairs_per_ctrlr": 127, 00:23:04.060 "in_capsule_data_size": 4096, 00:23:04.060 "max_io_size": 131072, 00:23:04.060 "io_unit_size": 131072, 00:23:04.060 "max_aq_depth": 128, 00:23:04.060 "num_shared_buffers": 511, 00:23:04.060 "buf_cache_size": 4294967295, 00:23:04.060 "dif_insert_or_strip": false, 00:23:04.060 "zcopy": false, 00:23:04.060 "c2h_success": false, 00:23:04.060 "sock_priority": 0, 00:23:04.060 "abort_timeout_sec": 1, 00:23:04.060 "ack_timeout": 0, 00:23:04.060 "data_wr_pool_size": 0 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "nvmf_create_subsystem", 00:23:04.060 "params": { 00:23:04.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.060 
"allow_any_host": false, 00:23:04.060 "serial_number": "SPDK00000000000001", 00:23:04.060 "model_number": "SPDK bdev Controller", 00:23:04.060 "max_namespaces": 10, 00:23:04.060 "min_cntlid": 1, 00:23:04.060 "max_cntlid": 65519, 00:23:04.060 "ana_reporting": false 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "nvmf_subsystem_add_host", 00:23:04.060 "params": { 00:23:04.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.060 "host": "nqn.2016-06.io.spdk:host1", 00:23:04.060 "psk": "key0" 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "nvmf_subsystem_add_ns", 00:23:04.060 "params": { 00:23:04.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.060 "namespace": { 00:23:04.060 "nsid": 1, 00:23:04.060 "bdev_name": "malloc0", 00:23:04.060 "nguid": "44FEF4D33B7D43B499310D91677B7EF8", 00:23:04.060 "uuid": "44fef4d3-3b7d-43b4-9931-0d91677b7ef8", 00:23:04.060 "no_auto_visible": false 00:23:04.060 } 00:23:04.060 } 00:23:04.060 }, 00:23:04.060 { 00:23:04.060 "method": "nvmf_subsystem_add_listener", 00:23:04.060 "params": { 00:23:04.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.061 "listen_address": { 00:23:04.061 "trtype": "TCP", 00:23:04.061 "adrfam": "IPv4", 00:23:04.061 "traddr": "10.0.0.2", 00:23:04.061 "trsvcid": "4420" 00:23:04.061 }, 00:23:04.061 "secure_channel": true 00:23:04.061 } 00:23:04.061 } 00:23:04.061 ] 00:23:04.061 } 00:23:04.061 ] 00:23:04.061 }' 00:23:04.061 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:04.318 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:04.318 "subsystems": [ 00:23:04.318 { 00:23:04.318 "subsystem": "keyring", 00:23:04.318 "config": [ 00:23:04.318 { 00:23:04.318 "method": "keyring_file_add_key", 00:23:04.318 "params": { 00:23:04.318 "name": "key0", 00:23:04.318 "path": "/tmp/tmp.9mE94VFsKi" 00:23:04.318 } 
00:23:04.318 } 00:23:04.318 ] 00:23:04.318 }, 00:23:04.318 { 00:23:04.318 "subsystem": "iobuf", 00:23:04.318 "config": [ 00:23:04.318 { 00:23:04.318 "method": "iobuf_set_options", 00:23:04.318 "params": { 00:23:04.318 "small_pool_count": 8192, 00:23:04.318 "large_pool_count": 1024, 00:23:04.318 "small_bufsize": 8192, 00:23:04.318 "large_bufsize": 135168, 00:23:04.318 "enable_numa": false 00:23:04.318 } 00:23:04.318 } 00:23:04.318 ] 00:23:04.318 }, 00:23:04.318 { 00:23:04.318 "subsystem": "sock", 00:23:04.318 "config": [ 00:23:04.318 { 00:23:04.318 "method": "sock_set_default_impl", 00:23:04.318 "params": { 00:23:04.318 "impl_name": "posix" 00:23:04.318 } 00:23:04.318 }, 00:23:04.318 { 00:23:04.318 "method": "sock_impl_set_options", 00:23:04.318 "params": { 00:23:04.318 "impl_name": "ssl", 00:23:04.318 "recv_buf_size": 4096, 00:23:04.318 "send_buf_size": 4096, 00:23:04.318 "enable_recv_pipe": true, 00:23:04.318 "enable_quickack": false, 00:23:04.318 "enable_placement_id": 0, 00:23:04.318 "enable_zerocopy_send_server": true, 00:23:04.318 "enable_zerocopy_send_client": false, 00:23:04.318 "zerocopy_threshold": 0, 00:23:04.318 "tls_version": 0, 00:23:04.318 "enable_ktls": false 00:23:04.318 } 00:23:04.318 }, 00:23:04.318 { 00:23:04.318 "method": "sock_impl_set_options", 00:23:04.318 "params": { 00:23:04.318 "impl_name": "posix", 00:23:04.318 "recv_buf_size": 2097152, 00:23:04.318 "send_buf_size": 2097152, 00:23:04.318 "enable_recv_pipe": true, 00:23:04.318 "enable_quickack": false, 00:23:04.318 "enable_placement_id": 0, 00:23:04.318 "enable_zerocopy_send_server": true, 00:23:04.318 "enable_zerocopy_send_client": false, 00:23:04.318 "zerocopy_threshold": 0, 00:23:04.318 "tls_version": 0, 00:23:04.318 "enable_ktls": false 00:23:04.318 } 00:23:04.318 } 00:23:04.318 ] 00:23:04.318 }, 00:23:04.318 { 00:23:04.318 "subsystem": "vmd", 00:23:04.318 "config": [] 00:23:04.318 }, 00:23:04.318 { 00:23:04.318 "subsystem": "accel", 00:23:04.318 "config": [ 00:23:04.318 { 00:23:04.318 
"method": "accel_set_options", 00:23:04.318 "params": { 00:23:04.318 "small_cache_size": 128, 00:23:04.318 "large_cache_size": 16, 00:23:04.318 "task_count": 2048, 00:23:04.318 "sequence_count": 2048, 00:23:04.318 "buf_count": 2048 00:23:04.318 } 00:23:04.318 } 00:23:04.318 ] 00:23:04.318 }, 00:23:04.318 { 00:23:04.318 "subsystem": "bdev", 00:23:04.318 "config": [ 00:23:04.318 { 00:23:04.318 "method": "bdev_set_options", 00:23:04.319 "params": { 00:23:04.319 "bdev_io_pool_size": 65535, 00:23:04.319 "bdev_io_cache_size": 256, 00:23:04.319 "bdev_auto_examine": true, 00:23:04.319 "iobuf_small_cache_size": 128, 00:23:04.319 "iobuf_large_cache_size": 16 00:23:04.319 } 00:23:04.319 }, 00:23:04.319 { 00:23:04.319 "method": "bdev_raid_set_options", 00:23:04.319 "params": { 00:23:04.319 "process_window_size_kb": 1024, 00:23:04.319 "process_max_bandwidth_mb_sec": 0 00:23:04.319 } 00:23:04.319 }, 00:23:04.319 { 00:23:04.319 "method": "bdev_iscsi_set_options", 00:23:04.319 "params": { 00:23:04.319 "timeout_sec": 30 00:23:04.319 } 00:23:04.319 }, 00:23:04.319 { 00:23:04.319 "method": "bdev_nvme_set_options", 00:23:04.319 "params": { 00:23:04.319 "action_on_timeout": "none", 00:23:04.319 "timeout_us": 0, 00:23:04.319 "timeout_admin_us": 0, 00:23:04.319 "keep_alive_timeout_ms": 10000, 00:23:04.319 "arbitration_burst": 0, 00:23:04.319 "low_priority_weight": 0, 00:23:04.319 "medium_priority_weight": 0, 00:23:04.319 "high_priority_weight": 0, 00:23:04.319 "nvme_adminq_poll_period_us": 10000, 00:23:04.319 "nvme_ioq_poll_period_us": 0, 00:23:04.319 "io_queue_requests": 512, 00:23:04.319 "delay_cmd_submit": true, 00:23:04.319 "transport_retry_count": 4, 00:23:04.319 "bdev_retry_count": 3, 00:23:04.319 "transport_ack_timeout": 0, 00:23:04.319 "ctrlr_loss_timeout_sec": 0, 00:23:04.319 "reconnect_delay_sec": 0, 00:23:04.319 "fast_io_fail_timeout_sec": 0, 00:23:04.319 "disable_auto_failback": false, 00:23:04.319 "generate_uuids": false, 00:23:04.319 "transport_tos": 0, 00:23:04.319 
"nvme_error_stat": false, 00:23:04.319 "rdma_srq_size": 0, 00:23:04.319 "io_path_stat": false, 00:23:04.319 "allow_accel_sequence": false, 00:23:04.319 "rdma_max_cq_size": 0, 00:23:04.319 "rdma_cm_event_timeout_ms": 0, 00:23:04.319 "dhchap_digests": [ 00:23:04.319 "sha256", 00:23:04.319 "sha384", 00:23:04.319 "sha512" 00:23:04.319 ], 00:23:04.319 "dhchap_dhgroups": [ 00:23:04.319 "null", 00:23:04.319 "ffdhe2048", 00:23:04.319 "ffdhe3072", 00:23:04.319 "ffdhe4096", 00:23:04.319 "ffdhe6144", 00:23:04.319 "ffdhe8192" 00:23:04.319 ] 00:23:04.319 } 00:23:04.319 }, 00:23:04.319 { 00:23:04.319 "method": "bdev_nvme_attach_controller", 00:23:04.319 "params": { 00:23:04.319 "name": "TLSTEST", 00:23:04.319 "trtype": "TCP", 00:23:04.319 "adrfam": "IPv4", 00:23:04.319 "traddr": "10.0.0.2", 00:23:04.319 "trsvcid": "4420", 00:23:04.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.319 "prchk_reftag": false, 00:23:04.319 "prchk_guard": false, 00:23:04.319 "ctrlr_loss_timeout_sec": 0, 00:23:04.319 "reconnect_delay_sec": 0, 00:23:04.319 "fast_io_fail_timeout_sec": 0, 00:23:04.319 "psk": "key0", 00:23:04.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.319 "hdgst": false, 00:23:04.319 "ddgst": false, 00:23:04.319 "multipath": "multipath" 00:23:04.319 } 00:23:04.319 }, 00:23:04.319 { 00:23:04.319 "method": "bdev_nvme_set_hotplug", 00:23:04.319 "params": { 00:23:04.319 "period_us": 100000, 00:23:04.319 "enable": false 00:23:04.319 } 00:23:04.319 }, 00:23:04.319 { 00:23:04.319 "method": "bdev_wait_for_examine" 00:23:04.319 } 00:23:04.319 ] 00:23:04.319 }, 00:23:04.319 { 00:23:04.319 "subsystem": "nbd", 00:23:04.319 "config": [] 00:23:04.319 } 00:23:04.319 ] 00:23:04.319 }' 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2266898 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2266898 ']' 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2266898 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266898 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266898' 00:23:04.319 killing process with pid 2266898 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2266898 00:23:04.319 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.319 00:23:04.319 Latency(us) 00:23:04.319 [2024-12-05T12:54:35.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.319 [2024-12-05T12:54:35.845Z] =================================================================================================================== 00:23:04.319 [2024-12-05T12:54:35.845Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:04.319 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2266898 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2266611 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2266611 ']' 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2266611 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266611 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266611' 00:23:04.577 killing process with pid 2266611 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2266611 00:23:04.577 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2266611 00:23:04.835 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:04.835 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.835 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:04.835 "subsystems": [ 00:23:04.835 { 00:23:04.835 "subsystem": "keyring", 00:23:04.835 "config": [ 00:23:04.835 { 00:23:04.835 "method": "keyring_file_add_key", 00:23:04.835 "params": { 00:23:04.835 "name": "key0", 00:23:04.835 "path": "/tmp/tmp.9mE94VFsKi" 00:23:04.835 } 00:23:04.835 } 00:23:04.835 ] 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "subsystem": "iobuf", 00:23:04.835 "config": [ 00:23:04.835 { 00:23:04.835 "method": "iobuf_set_options", 00:23:04.835 "params": { 00:23:04.835 "small_pool_count": 8192, 00:23:04.835 "large_pool_count": 1024, 00:23:04.835 "small_bufsize": 8192, 00:23:04.835 "large_bufsize": 135168, 00:23:04.835 "enable_numa": false 00:23:04.835 } 00:23:04.835 } 00:23:04.835 ] 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "subsystem": "sock", 00:23:04.835 "config": [ 00:23:04.835 { 00:23:04.835 "method": 
"sock_set_default_impl", 00:23:04.835 "params": { 00:23:04.835 "impl_name": "posix" 00:23:04.835 } 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "method": "sock_impl_set_options", 00:23:04.835 "params": { 00:23:04.835 "impl_name": "ssl", 00:23:04.835 "recv_buf_size": 4096, 00:23:04.835 "send_buf_size": 4096, 00:23:04.835 "enable_recv_pipe": true, 00:23:04.835 "enable_quickack": false, 00:23:04.835 "enable_placement_id": 0, 00:23:04.835 "enable_zerocopy_send_server": true, 00:23:04.835 "enable_zerocopy_send_client": false, 00:23:04.835 "zerocopy_threshold": 0, 00:23:04.835 "tls_version": 0, 00:23:04.835 "enable_ktls": false 00:23:04.835 } 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "method": "sock_impl_set_options", 00:23:04.835 "params": { 00:23:04.835 "impl_name": "posix", 00:23:04.835 "recv_buf_size": 2097152, 00:23:04.835 "send_buf_size": 2097152, 00:23:04.835 "enable_recv_pipe": true, 00:23:04.835 "enable_quickack": false, 00:23:04.835 "enable_placement_id": 0, 00:23:04.835 "enable_zerocopy_send_server": true, 00:23:04.835 "enable_zerocopy_send_client": false, 00:23:04.835 "zerocopy_threshold": 0, 00:23:04.835 "tls_version": 0, 00:23:04.835 "enable_ktls": false 00:23:04.835 } 00:23:04.835 } 00:23:04.835 ] 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "subsystem": "vmd", 00:23:04.835 "config": [] 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "subsystem": "accel", 00:23:04.835 "config": [ 00:23:04.835 { 00:23:04.835 "method": "accel_set_options", 00:23:04.835 "params": { 00:23:04.835 "small_cache_size": 128, 00:23:04.835 "large_cache_size": 16, 00:23:04.835 "task_count": 2048, 00:23:04.835 "sequence_count": 2048, 00:23:04.835 "buf_count": 2048 00:23:04.835 } 00:23:04.835 } 00:23:04.835 ] 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "subsystem": "bdev", 00:23:04.835 "config": [ 00:23:04.835 { 00:23:04.835 "method": "bdev_set_options", 00:23:04.835 "params": { 00:23:04.835 "bdev_io_pool_size": 65535, 00:23:04.835 "bdev_io_cache_size": 256, 00:23:04.835 
"bdev_auto_examine": true, 00:23:04.835 "iobuf_small_cache_size": 128, 00:23:04.835 "iobuf_large_cache_size": 16 00:23:04.835 } 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "method": "bdev_raid_set_options", 00:23:04.835 "params": { 00:23:04.835 "process_window_size_kb": 1024, 00:23:04.835 "process_max_bandwidth_mb_sec": 0 00:23:04.835 } 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "method": "bdev_iscsi_set_options", 00:23:04.835 "params": { 00:23:04.835 "timeout_sec": 30 00:23:04.835 } 00:23:04.835 }, 00:23:04.835 { 00:23:04.835 "method": "bdev_nvme_set_options", 00:23:04.835 "params": { 00:23:04.835 "action_on_timeout": "none", 00:23:04.836 "timeout_us": 0, 00:23:04.836 "timeout_admin_us": 0, 00:23:04.836 "keep_alive_timeout_ms": 10000, 00:23:04.836 "arbitration_burst": 0, 00:23:04.836 "low_priority_weight": 0, 00:23:04.836 "medium_priority_weight": 0, 00:23:04.836 "high_priority_weight": 0, 00:23:04.836 "nvme_adminq_poll_period_us": 10000, 00:23:04.836 "nvme_ioq_poll_period_us": 0, 00:23:04.836 "io_queue_requests": 0, 00:23:04.836 "delay_cmd_submit": true, 00:23:04.836 "transport_retry_count": 4, 00:23:04.836 "bdev_retry_count": 3, 00:23:04.836 "transport_ack_timeout": 0, 00:23:04.836 "ctrlr_loss_timeout_sec": 0, 00:23:04.836 "reconnect_delay_sec": 0, 00:23:04.836 "fast_io_fail_timeout_sec": 0, 00:23:04.836 "disable_auto_failback": false, 00:23:04.836 "generate_uuids": false, 00:23:04.836 "transport_tos": 0, 00:23:04.836 "nvme_error_stat": false, 00:23:04.836 "rdma_srq_size": 0, 00:23:04.836 "io_path_stat": false, 00:23:04.836 "allow_accel_sequence": false, 00:23:04.836 "rdma_max_cq_size": 0, 00:23:04.836 "rdma_cm_event_timeout_ms": 0, 00:23:04.836 "dhchap_digests": [ 00:23:04.836 "sha256", 00:23:04.836 "sha384", 00:23:04.836 "sha512" 00:23:04.836 ], 00:23:04.836 "dhchap_dhgroups": [ 00:23:04.836 "null", 00:23:04.836 "ffdhe2048", 00:23:04.836 "ffdhe3072", 00:23:04.836 "ffdhe4096", 00:23:04.836 "ffdhe6144", 00:23:04.836 "ffdhe8192" 00:23:04.836 ] 00:23:04.836 } 
00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "bdev_nvme_set_hotplug", 00:23:04.836 "params": { 00:23:04.836 "period_us": 100000, 00:23:04.836 "enable": false 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "bdev_malloc_create", 00:23:04.836 "params": { 00:23:04.836 "name": "malloc0", 00:23:04.836 "num_blocks": 8192, 00:23:04.836 "block_size": 4096, 00:23:04.836 "physical_block_size": 4096, 00:23:04.836 "uuid": "44fef4d3-3b7d-43b4-9931-0d91677b7ef8", 00:23:04.836 "optimal_io_boundary": 0, 00:23:04.836 "md_size": 0, 00:23:04.836 "dif_type": 0, 00:23:04.836 "dif_is_head_of_md": false, 00:23:04.836 "dif_pi_format": 0 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "bdev_wait_for_examine" 00:23:04.836 } 00:23:04.836 ] 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "subsystem": "nbd", 00:23:04.836 "config": [] 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "subsystem": "scheduler", 00:23:04.836 "config": [ 00:23:04.836 { 00:23:04.836 "method": "framework_set_scheduler", 00:23:04.836 "params": { 00:23:04.836 "name": "static" 00:23:04.836 } 00:23:04.836 } 00:23:04.836 ] 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "subsystem": "nvmf", 00:23:04.836 "config": [ 00:23:04.836 { 00:23:04.836 "method": "nvmf_set_config", 00:23:04.836 "params": { 00:23:04.836 "discovery_filter": "match_any", 00:23:04.836 "admin_cmd_passthru": { 00:23:04.836 "identify_ctrlr": false 00:23:04.836 }, 00:23:04.836 "dhchap_digests": [ 00:23:04.836 "sha256", 00:23:04.836 "sha384", 00:23:04.836 "sha512" 00:23:04.836 ], 00:23:04.836 "dhchap_dhgroups": [ 00:23:04.836 "null", 00:23:04.836 "ffdhe2048", 00:23:04.836 "ffdhe3072", 00:23:04.836 "ffdhe4096", 00:23:04.836 "ffdhe6144", 00:23:04.836 "ffdhe8192" 00:23:04.836 ] 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "nvmf_set_max_subsystems", 00:23:04.836 "params": { 00:23:04.836 "max_subsystems": 1024 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "nvmf_set_crdt", 
00:23:04.836 "params": { 00:23:04.836 "crdt1": 0, 00:23:04.836 "crdt2": 0, 00:23:04.836 "crdt3": 0 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "nvmf_create_transport", 00:23:04.836 "params": { 00:23:04.836 "trtype": "TCP", 00:23:04.836 "max_queue_depth": 128, 00:23:04.836 "max_io_qpairs_per_ctrlr": 127, 00:23:04.836 "in_capsule_data_size": 4096, 00:23:04.836 "max_io_size": 131072, 00:23:04.836 "io_unit_size": 131072, 00:23:04.836 "max_aq_depth": 128, 00:23:04.836 "num_shared_buffers": 511, 00:23:04.836 "buf_cache_size": 4294967295, 00:23:04.836 "dif_insert_or_strip": false, 00:23:04.836 "zcopy": false, 00:23:04.836 "c2h_success": false, 00:23:04.836 "sock_priority": 0, 00:23:04.836 "abort_timeout_sec": 1, 00:23:04.836 "ack_timeout": 0, 00:23:04.836 "data_wr_pool_size": 0 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "nvmf_create_subsystem", 00:23:04.836 "params": { 00:23:04.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.836 "allow_any_host": false, 00:23:04.836 "serial_number": "SPDK00000000000001", 00:23:04.836 "model_number": "SPDK bdev Controller", 00:23:04.836 "max_namespaces": 10, 00:23:04.836 "min_cntlid": 1, 00:23:04.836 "max_cntlid": 65519, 00:23:04.836 "ana_reporting": false 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "nvmf_subsystem_add_host", 00:23:04.836 "params": { 00:23:04.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.836 "host": "nqn.2016-06.io.spdk:host1", 00:23:04.836 "psk": "key0" 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 "method": "nvmf_subsystem_add_ns", 00:23:04.836 "params": { 00:23:04.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.836 "namespace": { 00:23:04.836 "nsid": 1, 00:23:04.836 "bdev_name": "malloc0", 00:23:04.836 "nguid": "44FEF4D33B7D43B499310D91677B7EF8", 00:23:04.836 "uuid": "44fef4d3-3b7d-43b4-9931-0d91677b7ef8", 00:23:04.836 "no_auto_visible": false 00:23:04.836 } 00:23:04.836 } 00:23:04.836 }, 00:23:04.836 { 00:23:04.836 
"method": "nvmf_subsystem_add_listener", 00:23:04.836 "params": { 00:23:04.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.836 "listen_address": { 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.836 "trtype": "TCP", 00:23:04.836 "adrfam": "IPv4", 00:23:04.836 "traddr": "10.0.0.2", 00:23:04.836 "trsvcid": "4420" 00:23:04.836 }, 00:23:04.836 "secure_channel": true 00:23:04.836 } 00:23:04.836 } 00:23:04.836 ] 00:23:04.836 } 00:23:04.836 ] 00:23:04.836 }' 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2267178 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2267178 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2267178 ']' 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.836 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.836 [2024-12-05 13:54:36.234203] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:04.836 [2024-12-05 13:54:36.234296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.836 [2024-12-05 13:54:36.307572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.094 [2024-12-05 13:54:36.364836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.094 [2024-12-05 13:54:36.364882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.094 [2024-12-05 13:54:36.364910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.094 [2024-12-05 13:54:36.364922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.094 [2024-12-05 13:54:36.364932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.094 [2024-12-05 13:54:36.365610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.094 [2024-12-05 13:54:36.609262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.351 [2024-12-05 13:54:36.641281] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.351 [2024-12-05 13:54:36.641539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2267331 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2267331 /var/tmp/bdevperf.sock 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2267331 ']' 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
-c /dev/fd/63 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.917 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:05.917 "subsystems": [ 00:23:05.917 { 00:23:05.917 "subsystem": "keyring", 00:23:05.917 "config": [ 00:23:05.917 { 00:23:05.917 "method": "keyring_file_add_key", 00:23:05.917 "params": { 00:23:05.917 "name": "key0", 00:23:05.917 "path": "/tmp/tmp.9mE94VFsKi" 00:23:05.917 } 00:23:05.917 } 00:23:05.917 ] 00:23:05.917 }, 00:23:05.917 { 00:23:05.917 "subsystem": "iobuf", 00:23:05.917 "config": [ 00:23:05.917 { 00:23:05.917 "method": "iobuf_set_options", 00:23:05.917 "params": { 00:23:05.917 "small_pool_count": 8192, 00:23:05.917 "large_pool_count": 1024, 00:23:05.917 "small_bufsize": 8192, 00:23:05.917 "large_bufsize": 135168, 00:23:05.917 "enable_numa": false 00:23:05.917 } 00:23:05.917 } 00:23:05.917 ] 00:23:05.917 }, 00:23:05.917 { 00:23:05.917 "subsystem": "sock", 00:23:05.917 "config": [ 00:23:05.917 { 00:23:05.917 "method": "sock_set_default_impl", 00:23:05.917 "params": { 00:23:05.917 "impl_name": "posix" 00:23:05.917 } 00:23:05.917 }, 00:23:05.917 { 00:23:05.917 "method": "sock_impl_set_options", 00:23:05.917 "params": { 00:23:05.917 "impl_name": "ssl", 00:23:05.917 "recv_buf_size": 4096, 00:23:05.917 "send_buf_size": 4096, 00:23:05.917 "enable_recv_pipe": true, 00:23:05.917 "enable_quickack": false, 00:23:05.917 "enable_placement_id": 0, 00:23:05.917 "enable_zerocopy_send_server": true, 00:23:05.917 "enable_zerocopy_send_client": false, 00:23:05.917 "zerocopy_threshold": 0, 00:23:05.917 "tls_version": 0, 00:23:05.917 "enable_ktls": false 
00:23:05.917 } 00:23:05.917 }, 00:23:05.917 { 00:23:05.917 "method": "sock_impl_set_options", 00:23:05.917 "params": { 00:23:05.917 "impl_name": "posix", 00:23:05.917 "recv_buf_size": 2097152, 00:23:05.917 "send_buf_size": 2097152, 00:23:05.917 "enable_recv_pipe": true, 00:23:05.917 "enable_quickack": false, 00:23:05.917 "enable_placement_id": 0, 00:23:05.917 "enable_zerocopy_send_server": true, 00:23:05.917 "enable_zerocopy_send_client": false, 00:23:05.917 "zerocopy_threshold": 0, 00:23:05.917 "tls_version": 0, 00:23:05.917 "enable_ktls": false 00:23:05.917 } 00:23:05.917 } 00:23:05.917 ] 00:23:05.917 }, 00:23:05.917 { 00:23:05.917 "subsystem": "vmd", 00:23:05.917 "config": [] 00:23:05.917 }, 00:23:05.917 { 00:23:05.917 "subsystem": "accel", 00:23:05.917 "config": [ 00:23:05.917 { 00:23:05.917 "method": "accel_set_options", 00:23:05.918 "params": { 00:23:05.918 "small_cache_size": 128, 00:23:05.918 "large_cache_size": 16, 00:23:05.918 "task_count": 2048, 00:23:05.918 "sequence_count": 2048, 00:23:05.918 "buf_count": 2048 00:23:05.918 } 00:23:05.918 } 00:23:05.918 ] 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "subsystem": "bdev", 00:23:05.918 "config": [ 00:23:05.918 { 00:23:05.918 "method": "bdev_set_options", 00:23:05.918 "params": { 00:23:05.918 "bdev_io_pool_size": 65535, 00:23:05.918 "bdev_io_cache_size": 256, 00:23:05.918 "bdev_auto_examine": true, 00:23:05.918 "iobuf_small_cache_size": 128, 00:23:05.918 "iobuf_large_cache_size": 16 00:23:05.918 } 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "method": "bdev_raid_set_options", 00:23:05.918 "params": { 00:23:05.918 "process_window_size_kb": 1024, 00:23:05.918 "process_max_bandwidth_mb_sec": 0 00:23:05.918 } 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "method": "bdev_iscsi_set_options", 00:23:05.918 "params": { 00:23:05.918 "timeout_sec": 30 00:23:05.918 } 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "method": "bdev_nvme_set_options", 00:23:05.918 "params": { 00:23:05.918 "action_on_timeout": "none", 00:23:05.918 
"timeout_us": 0, 00:23:05.918 "timeout_admin_us": 0, 00:23:05.918 "keep_alive_timeout_ms": 10000, 00:23:05.918 "arbitration_burst": 0, 00:23:05.918 "low_priority_weight": 0, 00:23:05.918 "medium_priority_weight": 0, 00:23:05.918 "high_priority_weight": 0, 00:23:05.918 "nvme_adminq_poll_period_us": 10000, 00:23:05.918 "nvme_ioq_poll_period_us": 0, 00:23:05.918 "io_queue_requests": 512, 00:23:05.918 "delay_cmd_submit": true, 00:23:05.918 "transport_retry_count": 4, 00:23:05.918 "bdev_retry_count": 3, 00:23:05.918 "transport_ack_timeout": 0, 00:23:05.918 "ctrlr_loss_timeout_sec": 0, 00:23:05.918 "reconnect_delay_sec": 0, 00:23:05.918 "fast_io_fail_timeout_sec": 0, 00:23:05.918 "disable_auto_failback": false, 00:23:05.918 "generate_uuids": false, 00:23:05.918 "transport_tos": 0, 00:23:05.918 "nvme_error_stat": false, 00:23:05.918 "rdma_srq_size": 0, 00:23:05.918 "io_path_stat": false, 00:23:05.918 "allow_accel_sequence": false, 00:23:05.918 "rdma_max_cq_size": 0, 00:23:05.918 "rdma_cm_event_timeout_ms": 0, 00:23:05.918 "dhchap_digests": [ 00:23:05.918 "sha256", 00:23:05.918 "sha384", 00:23:05.918 "sha512" 00:23:05.918 ], 00:23:05.918 "dhchap_dhgroups": [ 00:23:05.918 "null", 00:23:05.918 "ffdhe2048", 00:23:05.918 "ffdhe3072", 00:23:05.918 "ffdhe4096", 00:23:05.918 "ffdhe6144", 00:23:05.918 "ffdhe8192" 00:23:05.918 ] 00:23:05.918 } 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "method": "bdev_nvme_attach_controller", 00:23:05.918 "params": { 00:23:05.918 "name": "TLSTEST", 00:23:05.918 "trtype": "TCP", 00:23:05.918 "adrfam": "IPv4", 00:23:05.918 "traddr": "10.0.0.2", 00:23:05.918 "trsvcid": "4420", 00:23:05.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.918 "prchk_reftag": false, 00:23:05.918 "prchk_guard": false, 00:23:05.918 "ctrlr_loss_timeout_sec": 0, 00:23:05.918 "reconnect_delay_sec": 0, 00:23:05.918 "fast_io_fail_timeout_sec": 0, 00:23:05.918 "psk": "key0", 00:23:05.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.918 "hdgst": false, 00:23:05.918 "ddgst": 
false, 00:23:05.918 "multipath": "multipath" 00:23:05.918 } 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "method": "bdev_nvme_set_hotplug", 00:23:05.918 "params": { 00:23:05.918 "period_us": 100000, 00:23:05.918 "enable": false 00:23:05.918 } 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "method": "bdev_wait_for_examine" 00:23:05.918 } 00:23:05.918 ] 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "subsystem": "nbd", 00:23:05.918 "config": [] 00:23:05.918 } 00:23:05.918 ] 00:23:05.918 }' 00:23:05.918 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.918 [2024-12-05 13:54:37.312322] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:05.918 [2024-12-05 13:54:37.312436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267331 ] 00:23:05.918 [2024-12-05 13:54:37.381111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.918 [2024-12-05 13:54:37.437937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.176 [2024-12-05 13:54:37.619217] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.434 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.434 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.434 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:06.434 Running I/O for 10 seconds... 
00:23:08.787 3404.00 IOPS, 13.30 MiB/s [2024-12-05T12:54:40.903Z] 3478.00 IOPS, 13.59 MiB/s [2024-12-05T12:54:42.288Z] 3473.33 IOPS, 13.57 MiB/s [2024-12-05T12:54:43.220Z] 3499.75 IOPS, 13.67 MiB/s [2024-12-05T12:54:44.173Z] 3496.60 IOPS, 13.66 MiB/s [2024-12-05T12:54:45.107Z] 3502.50 IOPS, 13.68 MiB/s [2024-12-05T12:54:46.039Z] 3517.57 IOPS, 13.74 MiB/s [2024-12-05T12:54:46.970Z] 3520.00 IOPS, 13.75 MiB/s [2024-12-05T12:54:47.903Z] 3523.33 IOPS, 13.76 MiB/s [2024-12-05T12:54:47.903Z] 3513.20 IOPS, 13.72 MiB/s 00:23:16.377 Latency(us) 00:23:16.377 [2024-12-05T12:54:47.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.377 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:16.377 Verification LBA range: start 0x0 length 0x2000 00:23:16.377 TLSTESTn1 : 10.02 3518.77 13.75 0.00 0.00 36316.05 7281.78 47185.92 00:23:16.377 [2024-12-05T12:54:47.903Z] =================================================================================================================== 00:23:16.377 [2024-12-05T12:54:47.903Z] Total : 3518.77 13.75 0.00 0.00 36316.05 7281.78 47185.92 00:23:16.377 { 00:23:16.377 "results": [ 00:23:16.377 { 00:23:16.377 "job": "TLSTESTn1", 00:23:16.377 "core_mask": "0x4", 00:23:16.377 "workload": "verify", 00:23:16.377 "status": "finished", 00:23:16.377 "verify_range": { 00:23:16.377 "start": 0, 00:23:16.377 "length": 8192 00:23:16.377 }, 00:23:16.377 "queue_depth": 128, 00:23:16.377 "io_size": 4096, 00:23:16.377 "runtime": 10.020249, 00:23:16.377 "iops": 3518.7748328409803, 00:23:16.377 "mibps": 13.745214190785079, 00:23:16.377 "io_failed": 0, 00:23:16.377 "io_timeout": 0, 00:23:16.377 "avg_latency_us": 36316.045310081056, 00:23:16.377 "min_latency_us": 7281.777777777777, 00:23:16.377 "max_latency_us": 47185.92 00:23:16.377 } 00:23:16.377 ], 00:23:16.377 "core_count": 1 00:23:16.377 } 00:23:16.377 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:16.377 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2267331 00:23:16.377 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2267331 ']' 00:23:16.377 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2267331 00:23:16.377 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.635 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.635 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267331 00:23:16.635 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.635 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.635 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267331' 00:23:16.635 killing process with pid 2267331 00:23:16.635 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2267331 00:23:16.635 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.635 00:23:16.635 Latency(us) 00:23:16.635 [2024-12-05T12:54:48.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.635 [2024-12-05T12:54:48.161Z] =================================================================================================================== 00:23:16.635 [2024-12-05T12:54:48.161Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.635 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2267331 00:23:16.904 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2267178 00:23:16.904 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 2267178 ']' 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2267178 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267178 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267178' 00:23:16.905 killing process with pid 2267178 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2267178 00:23:16.905 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2267178 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2268597 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2268597 00:23:17.164 13:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2268597 ']' 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.164 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.164 [2024-12-05 13:54:48.495900] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:17.164 [2024-12-05 13:54:48.495987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.164 [2024-12-05 13:54:48.566104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.164 [2024-12-05 13:54:48.618811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.164 [2024-12-05 13:54:48.618858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.164 [2024-12-05 13:54:48.618886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.164 [2024-12-05 13:54:48.618898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:17.164 [2024-12-05 13:54:48.618907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.164 [2024-12-05 13:54:48.619449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.9mE94VFsKi 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9mE94VFsKi 00:23:17.421 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:17.678 [2024-12-05 13:54:49.003849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.678 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:17.936 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:18.193 [2024-12-05 13:54:49.525244] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:18.193 [2024-12-05 13:54:49.525523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.193 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:18.451 malloc0 00:23:18.451 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:18.709 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:23:18.967 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2268844 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2268844 /var/tmp/bdevperf.sock 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2268844 ']' 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.225 
13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.225 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.225 [2024-12-05 13:54:50.674547] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:19.225 [2024-12-05 13:54:50.674629] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268844 ] 00:23:19.225 [2024-12-05 13:54:50.745357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.483 [2024-12-05 13:54:50.804025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.483 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.483 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.483 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:23:19.741 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:19.999 [2024-12-05 13:54:51.432131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:19.999 nvme0n1 00:23:20.257 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:20.257 Running I/O for 1 seconds... 00:23:21.192 3258.00 IOPS, 12.73 MiB/s 00:23:21.192 Latency(us) 00:23:21.192 [2024-12-05T12:54:52.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.192 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:21.192 Verification LBA range: start 0x0 length 0x2000 00:23:21.192 nvme0n1 : 1.02 3314.55 12.95 0.00 0.00 38250.14 9175.04 35535.08 00:23:21.192 [2024-12-05T12:54:52.718Z] =================================================================================================================== 00:23:21.192 [2024-12-05T12:54:52.718Z] Total : 3314.55 12.95 0.00 0.00 38250.14 9175.04 35535.08 00:23:21.192 { 00:23:21.192 "results": [ 00:23:21.192 { 00:23:21.192 "job": "nvme0n1", 00:23:21.192 "core_mask": "0x2", 00:23:21.192 "workload": "verify", 00:23:21.192 "status": "finished", 00:23:21.192 "verify_range": { 00:23:21.192 "start": 0, 00:23:21.192 "length": 8192 00:23:21.192 }, 00:23:21.192 "queue_depth": 128, 00:23:21.192 "io_size": 4096, 00:23:21.192 "runtime": 1.021859, 00:23:21.192 "iops": 3314.5473103432078, 00:23:21.192 "mibps": 12.947450431028155, 00:23:21.192 "io_failed": 0, 00:23:21.192 "io_timeout": 0, 00:23:21.192 "avg_latency_us": 38250.14455532592, 00:23:21.192 "min_latency_us": 9175.04, 00:23:21.192 "max_latency_us": 35535.07555555556 00:23:21.192 } 00:23:21.192 ], 00:23:21.192 "core_count": 1 00:23:21.192 } 00:23:21.192 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2268844 00:23:21.192 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2268844 ']' 00:23:21.192 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2268844 00:23:21.192 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.192 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.192 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268844 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268844' 00:23:21.451 killing process with pid 2268844 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2268844 00:23:21.451 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.451 00:23:21.451 Latency(us) 00:23:21.451 [2024-12-05T12:54:52.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.451 [2024-12-05T12:54:52.977Z] =================================================================================================================== 00:23:21.451 [2024-12-05T12:54:52.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2268844 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2268597 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2268597 ']' 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2268597 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.451 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268597 00:23:21.711 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.711 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.711 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268597' 00:23:21.711 killing process with pid 2268597 00:23:21.711 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2268597 00:23:21.711 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2268597 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2269238 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2269238 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2269238 ']' 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.711 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.969 [2024-12-05 13:54:53.269597] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:21.970 [2024-12-05 13:54:53.269685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.970 [2024-12-05 13:54:53.341718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.970 [2024-12-05 13:54:53.394971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.970 [2024-12-05 13:54:53.395034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.970 [2024-12-05 13:54:53.395047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.970 [2024-12-05 13:54:53.395057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.970 [2024-12-05 13:54:53.395066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.970 [2024-12-05 13:54:53.395675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.228 [2024-12-05 13:54:53.538821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.228 malloc0 00:23:22.228 [2024-12-05 13:54:53.570548] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.228 [2024-12-05 13:54:53.570848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2269263 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 2269263 /var/tmp/bdevperf.sock 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2269263 ']' 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.228 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.228 [2024-12-05 13:54:53.642370] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:23:22.228 [2024-12-05 13:54:53.642455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269263 ] 00:23:22.228 [2024-12-05 13:54:53.708712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.486 [2024-12-05 13:54:53.766136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.486 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.486 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.486 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9mE94VFsKi 00:23:22.744 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:23.002 [2024-12-05 13:54:54.420387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.002 nvme0n1 00:23:23.002 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.259 Running I/O for 1 seconds... 
00:23:24.191 3448.00 IOPS, 13.47 MiB/s 00:23:24.191 Latency(us) 00:23:24.191 [2024-12-05T12:54:55.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.191 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:24.191 Verification LBA range: start 0x0 length 0x2000 00:23:24.191 nvme0n1 : 1.02 3504.17 13.69 0.00 0.00 36173.93 6456.51 27185.30 00:23:24.191 [2024-12-05T12:54:55.717Z] =================================================================================================================== 00:23:24.191 [2024-12-05T12:54:55.717Z] Total : 3504.17 13.69 0.00 0.00 36173.93 6456.51 27185.30 00:23:24.191 { 00:23:24.191 "results": [ 00:23:24.191 { 00:23:24.191 "job": "nvme0n1", 00:23:24.191 "core_mask": "0x2", 00:23:24.191 "workload": "verify", 00:23:24.191 "status": "finished", 00:23:24.191 "verify_range": { 00:23:24.191 "start": 0, 00:23:24.191 "length": 8192 00:23:24.191 }, 00:23:24.191 "queue_depth": 128, 00:23:24.191 "io_size": 4096, 00:23:24.191 "runtime": 1.020497, 00:23:24.191 "iops": 3504.1749265308963, 00:23:24.191 "mibps": 13.688183306761314, 00:23:24.191 "io_failed": 0, 00:23:24.191 "io_timeout": 0, 00:23:24.191 "avg_latency_us": 36173.93480818627, 00:23:24.191 "min_latency_us": 6456.50962962963, 00:23:24.191 "max_latency_us": 27185.303703703703 00:23:24.191 } 00:23:24.191 ], 00:23:24.191 "core_count": 1 00:23:24.191 } 00:23:24.191 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:24.191 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.191 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.449 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.449 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:24.449 "subsystems": [ 00:23:24.449 { 00:23:24.449 "subsystem": 
"keyring", 00:23:24.449 "config": [ 00:23:24.449 { 00:23:24.449 "method": "keyring_file_add_key", 00:23:24.449 "params": { 00:23:24.449 "name": "key0", 00:23:24.449 "path": "/tmp/tmp.9mE94VFsKi" 00:23:24.449 } 00:23:24.449 } 00:23:24.449 ] 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "subsystem": "iobuf", 00:23:24.449 "config": [ 00:23:24.449 { 00:23:24.449 "method": "iobuf_set_options", 00:23:24.449 "params": { 00:23:24.449 "small_pool_count": 8192, 00:23:24.449 "large_pool_count": 1024, 00:23:24.449 "small_bufsize": 8192, 00:23:24.449 "large_bufsize": 135168, 00:23:24.449 "enable_numa": false 00:23:24.449 } 00:23:24.449 } 00:23:24.449 ] 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "subsystem": "sock", 00:23:24.449 "config": [ 00:23:24.449 { 00:23:24.449 "method": "sock_set_default_impl", 00:23:24.449 "params": { 00:23:24.449 "impl_name": "posix" 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "sock_impl_set_options", 00:23:24.449 "params": { 00:23:24.449 "impl_name": "ssl", 00:23:24.449 "recv_buf_size": 4096, 00:23:24.449 "send_buf_size": 4096, 00:23:24.449 "enable_recv_pipe": true, 00:23:24.449 "enable_quickack": false, 00:23:24.449 "enable_placement_id": 0, 00:23:24.449 "enable_zerocopy_send_server": true, 00:23:24.449 "enable_zerocopy_send_client": false, 00:23:24.449 "zerocopy_threshold": 0, 00:23:24.449 "tls_version": 0, 00:23:24.449 "enable_ktls": false 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "sock_impl_set_options", 00:23:24.449 "params": { 00:23:24.449 "impl_name": "posix", 00:23:24.449 "recv_buf_size": 2097152, 00:23:24.449 "send_buf_size": 2097152, 00:23:24.449 "enable_recv_pipe": true, 00:23:24.449 "enable_quickack": false, 00:23:24.449 "enable_placement_id": 0, 00:23:24.449 "enable_zerocopy_send_server": true, 00:23:24.449 "enable_zerocopy_send_client": false, 00:23:24.449 "zerocopy_threshold": 0, 00:23:24.449 "tls_version": 0, 00:23:24.449 "enable_ktls": false 00:23:24.449 } 00:23:24.449 } 00:23:24.449 
] 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "subsystem": "vmd", 00:23:24.449 "config": [] 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "subsystem": "accel", 00:23:24.449 "config": [ 00:23:24.449 { 00:23:24.449 "method": "accel_set_options", 00:23:24.449 "params": { 00:23:24.449 "small_cache_size": 128, 00:23:24.449 "large_cache_size": 16, 00:23:24.449 "task_count": 2048, 00:23:24.449 "sequence_count": 2048, 00:23:24.449 "buf_count": 2048 00:23:24.449 } 00:23:24.449 } 00:23:24.449 ] 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "subsystem": "bdev", 00:23:24.449 "config": [ 00:23:24.449 { 00:23:24.449 "method": "bdev_set_options", 00:23:24.449 "params": { 00:23:24.449 "bdev_io_pool_size": 65535, 00:23:24.449 "bdev_io_cache_size": 256, 00:23:24.449 "bdev_auto_examine": true, 00:23:24.449 "iobuf_small_cache_size": 128, 00:23:24.449 "iobuf_large_cache_size": 16 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "bdev_raid_set_options", 00:23:24.449 "params": { 00:23:24.449 "process_window_size_kb": 1024, 00:23:24.449 "process_max_bandwidth_mb_sec": 0 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "bdev_iscsi_set_options", 00:23:24.449 "params": { 00:23:24.449 "timeout_sec": 30 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "bdev_nvme_set_options", 00:23:24.449 "params": { 00:23:24.449 "action_on_timeout": "none", 00:23:24.449 "timeout_us": 0, 00:23:24.449 "timeout_admin_us": 0, 00:23:24.449 "keep_alive_timeout_ms": 10000, 00:23:24.449 "arbitration_burst": 0, 00:23:24.449 "low_priority_weight": 0, 00:23:24.449 "medium_priority_weight": 0, 00:23:24.449 "high_priority_weight": 0, 00:23:24.449 "nvme_adminq_poll_period_us": 10000, 00:23:24.449 "nvme_ioq_poll_period_us": 0, 00:23:24.449 "io_queue_requests": 0, 00:23:24.449 "delay_cmd_submit": true, 00:23:24.449 "transport_retry_count": 4, 00:23:24.449 "bdev_retry_count": 3, 00:23:24.449 "transport_ack_timeout": 0, 00:23:24.449 "ctrlr_loss_timeout_sec": 0, 
00:23:24.449 "reconnect_delay_sec": 0, 00:23:24.449 "fast_io_fail_timeout_sec": 0, 00:23:24.449 "disable_auto_failback": false, 00:23:24.449 "generate_uuids": false, 00:23:24.449 "transport_tos": 0, 00:23:24.449 "nvme_error_stat": false, 00:23:24.449 "rdma_srq_size": 0, 00:23:24.449 "io_path_stat": false, 00:23:24.449 "allow_accel_sequence": false, 00:23:24.449 "rdma_max_cq_size": 0, 00:23:24.449 "rdma_cm_event_timeout_ms": 0, 00:23:24.449 "dhchap_digests": [ 00:23:24.449 "sha256", 00:23:24.449 "sha384", 00:23:24.449 "sha512" 00:23:24.449 ], 00:23:24.449 "dhchap_dhgroups": [ 00:23:24.449 "null", 00:23:24.449 "ffdhe2048", 00:23:24.449 "ffdhe3072", 00:23:24.449 "ffdhe4096", 00:23:24.449 "ffdhe6144", 00:23:24.449 "ffdhe8192" 00:23:24.449 ] 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "bdev_nvme_set_hotplug", 00:23:24.449 "params": { 00:23:24.449 "period_us": 100000, 00:23:24.449 "enable": false 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "bdev_malloc_create", 00:23:24.449 "params": { 00:23:24.449 "name": "malloc0", 00:23:24.449 "num_blocks": 8192, 00:23:24.449 "block_size": 4096, 00:23:24.449 "physical_block_size": 4096, 00:23:24.449 "uuid": "5ce1db43-a727-4d2d-8d3d-9bfb72f8bf0c", 00:23:24.449 "optimal_io_boundary": 0, 00:23:24.449 "md_size": 0, 00:23:24.449 "dif_type": 0, 00:23:24.449 "dif_is_head_of_md": false, 00:23:24.449 "dif_pi_format": 0 00:23:24.449 } 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "method": "bdev_wait_for_examine" 00:23:24.449 } 00:23:24.449 ] 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "subsystem": "nbd", 00:23:24.449 "config": [] 00:23:24.449 }, 00:23:24.449 { 00:23:24.449 "subsystem": "scheduler", 00:23:24.449 "config": [ 00:23:24.449 { 00:23:24.449 "method": "framework_set_scheduler", 00:23:24.450 "params": { 00:23:24.450 "name": "static" 00:23:24.450 } 00:23:24.450 } 00:23:24.450 ] 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "subsystem": "nvmf", 00:23:24.450 "config": [ 00:23:24.450 { 
00:23:24.450 "method": "nvmf_set_config", 00:23:24.450 "params": { 00:23:24.450 "discovery_filter": "match_any", 00:23:24.450 "admin_cmd_passthru": { 00:23:24.450 "identify_ctrlr": false 00:23:24.450 }, 00:23:24.450 "dhchap_digests": [ 00:23:24.450 "sha256", 00:23:24.450 "sha384", 00:23:24.450 "sha512" 00:23:24.450 ], 00:23:24.450 "dhchap_dhgroups": [ 00:23:24.450 "null", 00:23:24.450 "ffdhe2048", 00:23:24.450 "ffdhe3072", 00:23:24.450 "ffdhe4096", 00:23:24.450 "ffdhe6144", 00:23:24.450 "ffdhe8192" 00:23:24.450 ] 00:23:24.450 } 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "method": "nvmf_set_max_subsystems", 00:23:24.450 "params": { 00:23:24.450 "max_subsystems": 1024 00:23:24.450 } 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "method": "nvmf_set_crdt", 00:23:24.450 "params": { 00:23:24.450 "crdt1": 0, 00:23:24.450 "crdt2": 0, 00:23:24.450 "crdt3": 0 00:23:24.450 } 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "method": "nvmf_create_transport", 00:23:24.450 "params": { 00:23:24.450 "trtype": "TCP", 00:23:24.450 "max_queue_depth": 128, 00:23:24.450 "max_io_qpairs_per_ctrlr": 127, 00:23:24.450 "in_capsule_data_size": 4096, 00:23:24.450 "max_io_size": 131072, 00:23:24.450 "io_unit_size": 131072, 00:23:24.450 "max_aq_depth": 128, 00:23:24.450 "num_shared_buffers": 511, 00:23:24.450 "buf_cache_size": 4294967295, 00:23:24.450 "dif_insert_or_strip": false, 00:23:24.450 "zcopy": false, 00:23:24.450 "c2h_success": false, 00:23:24.450 "sock_priority": 0, 00:23:24.450 "abort_timeout_sec": 1, 00:23:24.450 "ack_timeout": 0, 00:23:24.450 "data_wr_pool_size": 0 00:23:24.450 } 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "method": "nvmf_create_subsystem", 00:23:24.450 "params": { 00:23:24.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.450 "allow_any_host": false, 00:23:24.450 "serial_number": "00000000000000000000", 00:23:24.450 "model_number": "SPDK bdev Controller", 00:23:24.450 "max_namespaces": 32, 00:23:24.450 "min_cntlid": 1, 00:23:24.450 "max_cntlid": 65519, 00:23:24.450 
"ana_reporting": false 00:23:24.450 } 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "method": "nvmf_subsystem_add_host", 00:23:24.450 "params": { 00:23:24.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.450 "host": "nqn.2016-06.io.spdk:host1", 00:23:24.450 "psk": "key0" 00:23:24.450 } 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "method": "nvmf_subsystem_add_ns", 00:23:24.450 "params": { 00:23:24.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.450 "namespace": { 00:23:24.450 "nsid": 1, 00:23:24.450 "bdev_name": "malloc0", 00:23:24.450 "nguid": "5CE1DB43A7274D2D8D3D9BFB72F8BF0C", 00:23:24.450 "uuid": "5ce1db43-a727-4d2d-8d3d-9bfb72f8bf0c", 00:23:24.450 "no_auto_visible": false 00:23:24.450 } 00:23:24.450 } 00:23:24.450 }, 00:23:24.450 { 00:23:24.450 "method": "nvmf_subsystem_add_listener", 00:23:24.450 "params": { 00:23:24.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.450 "listen_address": { 00:23:24.450 "trtype": "TCP", 00:23:24.450 "adrfam": "IPv4", 00:23:24.450 "traddr": "10.0.0.2", 00:23:24.450 "trsvcid": "4420" 00:23:24.450 }, 00:23:24.450 "secure_channel": false, 00:23:24.450 "sock_impl": "ssl" 00:23:24.450 } 00:23:24.450 } 00:23:24.450 ] 00:23:24.450 } 00:23:24.450 ] 00:23:24.450 }' 00:23:24.450 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:24.708 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:24.708 "subsystems": [ 00:23:24.708 { 00:23:24.708 "subsystem": "keyring", 00:23:24.708 "config": [ 00:23:24.708 { 00:23:24.708 "method": "keyring_file_add_key", 00:23:24.708 "params": { 00:23:24.708 "name": "key0", 00:23:24.708 "path": "/tmp/tmp.9mE94VFsKi" 00:23:24.708 } 00:23:24.708 } 00:23:24.708 ] 00:23:24.708 }, 00:23:24.708 { 00:23:24.708 "subsystem": "iobuf", 00:23:24.708 "config": [ 00:23:24.708 { 00:23:24.708 "method": "iobuf_set_options", 00:23:24.708 "params": { 00:23:24.708 
"small_pool_count": 8192, 00:23:24.708 "large_pool_count": 1024, 00:23:24.708 "small_bufsize": 8192, 00:23:24.708 "large_bufsize": 135168, 00:23:24.708 "enable_numa": false 00:23:24.708 } 00:23:24.708 } 00:23:24.708 ] 00:23:24.708 }, 00:23:24.708 { 00:23:24.708 "subsystem": "sock", 00:23:24.708 "config": [ 00:23:24.708 { 00:23:24.708 "method": "sock_set_default_impl", 00:23:24.708 "params": { 00:23:24.708 "impl_name": "posix" 00:23:24.708 } 00:23:24.708 }, 00:23:24.708 { 00:23:24.708 "method": "sock_impl_set_options", 00:23:24.708 "params": { 00:23:24.708 "impl_name": "ssl", 00:23:24.708 "recv_buf_size": 4096, 00:23:24.708 "send_buf_size": 4096, 00:23:24.708 "enable_recv_pipe": true, 00:23:24.708 "enable_quickack": false, 00:23:24.708 "enable_placement_id": 0, 00:23:24.708 "enable_zerocopy_send_server": true, 00:23:24.708 "enable_zerocopy_send_client": false, 00:23:24.708 "zerocopy_threshold": 0, 00:23:24.708 "tls_version": 0, 00:23:24.708 "enable_ktls": false 00:23:24.708 } 00:23:24.708 }, 00:23:24.708 { 00:23:24.708 "method": "sock_impl_set_options", 00:23:24.708 "params": { 00:23:24.708 "impl_name": "posix", 00:23:24.708 "recv_buf_size": 2097152, 00:23:24.708 "send_buf_size": 2097152, 00:23:24.708 "enable_recv_pipe": true, 00:23:24.708 "enable_quickack": false, 00:23:24.708 "enable_placement_id": 0, 00:23:24.708 "enable_zerocopy_send_server": true, 00:23:24.708 "enable_zerocopy_send_client": false, 00:23:24.708 "zerocopy_threshold": 0, 00:23:24.708 "tls_version": 0, 00:23:24.708 "enable_ktls": false 00:23:24.708 } 00:23:24.708 } 00:23:24.708 ] 00:23:24.708 }, 00:23:24.708 { 00:23:24.708 "subsystem": "vmd", 00:23:24.708 "config": [] 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "subsystem": "accel", 00:23:24.709 "config": [ 00:23:24.709 { 00:23:24.709 "method": "accel_set_options", 00:23:24.709 "params": { 00:23:24.709 "small_cache_size": 128, 00:23:24.709 "large_cache_size": 16, 00:23:24.709 "task_count": 2048, 00:23:24.709 "sequence_count": 2048, 00:23:24.709 
"buf_count": 2048 00:23:24.709 } 00:23:24.709 } 00:23:24.709 ] 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "subsystem": "bdev", 00:23:24.709 "config": [ 00:23:24.709 { 00:23:24.709 "method": "bdev_set_options", 00:23:24.709 "params": { 00:23:24.709 "bdev_io_pool_size": 65535, 00:23:24.709 "bdev_io_cache_size": 256, 00:23:24.709 "bdev_auto_examine": true, 00:23:24.709 "iobuf_small_cache_size": 128, 00:23:24.709 "iobuf_large_cache_size": 16 00:23:24.709 } 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "method": "bdev_raid_set_options", 00:23:24.709 "params": { 00:23:24.709 "process_window_size_kb": 1024, 00:23:24.709 "process_max_bandwidth_mb_sec": 0 00:23:24.709 } 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "method": "bdev_iscsi_set_options", 00:23:24.709 "params": { 00:23:24.709 "timeout_sec": 30 00:23:24.709 } 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "method": "bdev_nvme_set_options", 00:23:24.709 "params": { 00:23:24.709 "action_on_timeout": "none", 00:23:24.709 "timeout_us": 0, 00:23:24.709 "timeout_admin_us": 0, 00:23:24.709 "keep_alive_timeout_ms": 10000, 00:23:24.709 "arbitration_burst": 0, 00:23:24.709 "low_priority_weight": 0, 00:23:24.709 "medium_priority_weight": 0, 00:23:24.709 "high_priority_weight": 0, 00:23:24.709 "nvme_adminq_poll_period_us": 10000, 00:23:24.709 "nvme_ioq_poll_period_us": 0, 00:23:24.709 "io_queue_requests": 512, 00:23:24.709 "delay_cmd_submit": true, 00:23:24.709 "transport_retry_count": 4, 00:23:24.709 "bdev_retry_count": 3, 00:23:24.709 "transport_ack_timeout": 0, 00:23:24.709 "ctrlr_loss_timeout_sec": 0, 00:23:24.709 "reconnect_delay_sec": 0, 00:23:24.709 "fast_io_fail_timeout_sec": 0, 00:23:24.709 "disable_auto_failback": false, 00:23:24.709 "generate_uuids": false, 00:23:24.709 "transport_tos": 0, 00:23:24.709 "nvme_error_stat": false, 00:23:24.709 "rdma_srq_size": 0, 00:23:24.709 "io_path_stat": false, 00:23:24.709 "allow_accel_sequence": false, 00:23:24.709 "rdma_max_cq_size": 0, 00:23:24.709 "rdma_cm_event_timeout_ms": 0, 
00:23:24.709 "dhchap_digests": [ 00:23:24.709 "sha256", 00:23:24.709 "sha384", 00:23:24.709 "sha512" 00:23:24.709 ], 00:23:24.709 "dhchap_dhgroups": [ 00:23:24.709 "null", 00:23:24.709 "ffdhe2048", 00:23:24.709 "ffdhe3072", 00:23:24.709 "ffdhe4096", 00:23:24.709 "ffdhe6144", 00:23:24.709 "ffdhe8192" 00:23:24.709 ] 00:23:24.709 } 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "method": "bdev_nvme_attach_controller", 00:23:24.709 "params": { 00:23:24.709 "name": "nvme0", 00:23:24.709 "trtype": "TCP", 00:23:24.709 "adrfam": "IPv4", 00:23:24.709 "traddr": "10.0.0.2", 00:23:24.709 "trsvcid": "4420", 00:23:24.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.709 "prchk_reftag": false, 00:23:24.709 "prchk_guard": false, 00:23:24.709 "ctrlr_loss_timeout_sec": 0, 00:23:24.709 "reconnect_delay_sec": 0, 00:23:24.709 "fast_io_fail_timeout_sec": 0, 00:23:24.709 "psk": "key0", 00:23:24.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.709 "hdgst": false, 00:23:24.709 "ddgst": false, 00:23:24.709 "multipath": "multipath" 00:23:24.709 } 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "method": "bdev_nvme_set_hotplug", 00:23:24.709 "params": { 00:23:24.709 "period_us": 100000, 00:23:24.709 "enable": false 00:23:24.709 } 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "method": "bdev_enable_histogram", 00:23:24.709 "params": { 00:23:24.709 "name": "nvme0n1", 00:23:24.709 "enable": true 00:23:24.709 } 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "method": "bdev_wait_for_examine" 00:23:24.709 } 00:23:24.709 ] 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "subsystem": "nbd", 00:23:24.709 "config": [] 00:23:24.709 } 00:23:24.709 ] 00:23:24.709 }' 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2269263 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2269263 ']' 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2269263 00:23:24.709 13:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269263 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269263' 00:23:24.709 killing process with pid 2269263 00:23:24.709 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2269263 00:23:24.709 Received shutdown signal, test time was about 1.000000 seconds 00:23:24.709 00:23:24.709 Latency(us) 00:23:24.709 [2024-12-05T12:54:56.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.710 [2024-12-05T12:54:56.236Z] =================================================================================================================== 00:23:24.710 [2024-12-05T12:54:56.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.710 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2269263 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2269238 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2269238 ']' 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2269238 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.967 
13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269238 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269238' 00:23:24.967 killing process with pid 2269238 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2269238 00:23:24.967 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2269238 00:23:25.226 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:25.226 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.226 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:25.226 "subsystems": [ 00:23:25.226 { 00:23:25.226 "subsystem": "keyring", 00:23:25.226 "config": [ 00:23:25.226 { 00:23:25.226 "method": "keyring_file_add_key", 00:23:25.226 "params": { 00:23:25.226 "name": "key0", 00:23:25.226 "path": "/tmp/tmp.9mE94VFsKi" 00:23:25.226 } 00:23:25.226 } 00:23:25.226 ] 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "subsystem": "iobuf", 00:23:25.226 "config": [ 00:23:25.226 { 00:23:25.226 "method": "iobuf_set_options", 00:23:25.226 "params": { 00:23:25.226 "small_pool_count": 8192, 00:23:25.226 "large_pool_count": 1024, 00:23:25.226 "small_bufsize": 8192, 00:23:25.226 "large_bufsize": 135168, 00:23:25.226 "enable_numa": false 00:23:25.226 } 00:23:25.226 } 00:23:25.226 ] 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "subsystem": "sock", 00:23:25.226 "config": [ 00:23:25.226 { 00:23:25.226 "method": "sock_set_default_impl", 00:23:25.226 "params": { 00:23:25.226 "impl_name": "posix" 
00:23:25.226 } 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "method": "sock_impl_set_options", 00:23:25.226 "params": { 00:23:25.226 "impl_name": "ssl", 00:23:25.226 "recv_buf_size": 4096, 00:23:25.226 "send_buf_size": 4096, 00:23:25.226 "enable_recv_pipe": true, 00:23:25.226 "enable_quickack": false, 00:23:25.226 "enable_placement_id": 0, 00:23:25.226 "enable_zerocopy_send_server": true, 00:23:25.226 "enable_zerocopy_send_client": false, 00:23:25.226 "zerocopy_threshold": 0, 00:23:25.226 "tls_version": 0, 00:23:25.226 "enable_ktls": false 00:23:25.226 } 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "method": "sock_impl_set_options", 00:23:25.226 "params": { 00:23:25.226 "impl_name": "posix", 00:23:25.226 "recv_buf_size": 2097152, 00:23:25.226 "send_buf_size": 2097152, 00:23:25.226 "enable_recv_pipe": true, 00:23:25.226 "enable_quickack": false, 00:23:25.226 "enable_placement_id": 0, 00:23:25.226 "enable_zerocopy_send_server": true, 00:23:25.226 "enable_zerocopy_send_client": false, 00:23:25.226 "zerocopy_threshold": 0, 00:23:25.226 "tls_version": 0, 00:23:25.226 "enable_ktls": false 00:23:25.226 } 00:23:25.226 } 00:23:25.226 ] 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "subsystem": "vmd", 00:23:25.226 "config": [] 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "subsystem": "accel", 00:23:25.226 "config": [ 00:23:25.226 { 00:23:25.226 "method": "accel_set_options", 00:23:25.226 "params": { 00:23:25.226 "small_cache_size": 128, 00:23:25.226 "large_cache_size": 16, 00:23:25.226 "task_count": 2048, 00:23:25.226 "sequence_count": 2048, 00:23:25.226 "buf_count": 2048 00:23:25.226 } 00:23:25.226 } 00:23:25.226 ] 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "subsystem": "bdev", 00:23:25.226 "config": [ 00:23:25.226 { 00:23:25.226 "method": "bdev_set_options", 00:23:25.226 "params": { 00:23:25.226 "bdev_io_pool_size": 65535, 00:23:25.226 "bdev_io_cache_size": 256, 00:23:25.226 "bdev_auto_examine": true, 00:23:25.226 "iobuf_small_cache_size": 128, 00:23:25.226 
"iobuf_large_cache_size": 16 00:23:25.226 } 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "method": "bdev_raid_set_options", 00:23:25.226 "params": { 00:23:25.226 "process_window_size_kb": 1024, 00:23:25.226 "process_max_bandwidth_mb_sec": 0 00:23:25.226 } 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "method": "bdev_iscsi_set_options", 00:23:25.226 "params": { 00:23:25.226 "timeout_sec": 30 00:23:25.226 } 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "method": "bdev_nvme_set_options", 00:23:25.226 "params": { 00:23:25.226 "action_on_timeout": "none", 00:23:25.226 "timeout_us": 0, 00:23:25.226 "timeout_admin_us": 0, 00:23:25.226 "keep_alive_timeout_ms": 10000, 00:23:25.226 "arbitration_burst": 0, 00:23:25.226 "low_priority_weight": 0, 00:23:25.226 "medium_priority_weight": 0, 00:23:25.226 "high_priority_weight": 0, 00:23:25.226 "nvme_adminq_poll_period_us": 10000, 00:23:25.226 "nvme_ioq_poll_period_us": 0, 00:23:25.226 "io_queue_requests": 0, 00:23:25.226 "delay_cmd_submit": true, 00:23:25.226 "transport_retry_count": 4, 00:23:25.226 "bdev_retry_count": 3, 00:23:25.226 "transport_ack_timeout": 0, 00:23:25.226 "ctrlr_loss_timeout_sec": 0, 00:23:25.226 "reconnect_delay_sec": 0, 00:23:25.226 "fast_io_fail_timeout_sec": 0, 00:23:25.226 "disable_auto_failback": false, 00:23:25.226 "generate_uuids": false, 00:23:25.226 "transport_tos": 0, 00:23:25.226 "nvme_error_stat": false, 00:23:25.226 "rdma_srq_size": 0, 00:23:25.226 "io_path_stat": false, 00:23:25.226 "allow_accel_sequence": false, 00:23:25.226 "rdma_max_cq_size": 0, 00:23:25.226 "rdma_cm_event_timeout_ms": 0, 00:23:25.226 "dhchap_digests": [ 00:23:25.226 "sha256", 00:23:25.226 "sha384", 00:23:25.226 "sha512" 00:23:25.226 ], 00:23:25.226 "dhchap_dhgroups": [ 00:23:25.226 "null", 00:23:25.226 "ffdhe2048", 00:23:25.226 "ffdhe3072", 00:23:25.226 "ffdhe4096", 00:23:25.226 "ffdhe6144", 00:23:25.226 "ffdhe8192" 00:23:25.226 ] 00:23:25.226 } 00:23:25.226 }, 00:23:25.226 { 00:23:25.226 "method": "bdev_nvme_set_hotplug", 
00:23:25.226 "params": { 00:23:25.226 "period_us": 100000, 00:23:25.226 "enable": false 00:23:25.226 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "bdev_malloc_create", 00:23:25.227 "params": { 00:23:25.227 "name": "malloc0", 00:23:25.227 "num_blocks": 8192, 00:23:25.227 "block_size": 4096, 00:23:25.227 "physical_block_size": 4096, 00:23:25.227 "uuid": "5ce1db43-a727-4d2d-8d3d-9bfb72f8bf0c", 00:23:25.227 "optimal_io_boundary": 0, 00:23:25.227 "md_size": 0, 00:23:25.227 "dif_type": 0, 00:23:25.227 "dif_is_head_of_md": false, 00:23:25.227 "dif_pi_format": 0 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "bdev_wait_for_examine" 00:23:25.227 } 00:23:25.227 ] 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "subsystem": "nbd", 00:23:25.227 "config": [] 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "subsystem": "scheduler", 00:23:25.227 "config": [ 00:23:25.227 { 00:23:25.227 "method": "framework_set_scheduler", 00:23:25.227 "params": { 00:23:25.227 "name": "static" 00:23:25.227 } 00:23:25.227 } 00:23:25.227 ] 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "subsystem": "nvmf", 00:23:25.227 "config": [ 00:23:25.227 { 00:23:25.227 "method": "nvmf_set_config", 00:23:25.227 "params": { 00:23:25.227 "discovery_filter": "match_any", 00:23:25.227 "admin_cmd_passthru": { 00:23:25.227 "identify_ctrlr": false 00:23:25.227 }, 00:23:25.227 "dhchap_digests": [ 00:23:25.227 "sha256", 00:23:25.227 "sha384", 00:23:25.227 "sha512" 00:23:25.227 ], 00:23:25.227 "dhchap_dhgroups": [ 00:23:25.227 "null", 00:23:25.227 "ffdhe2048", 00:23:25.227 "ffdhe3072", 00:23:25.227 "ffdhe4096", 00:23:25.227 "ffdhe6144", 00:23:25.227 "ffdhe8192" 00:23:25.227 ] 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "nvmf_set_max_subsystems", 00:23:25.227 "params": { 00:23:25.227 "max_subsystems": 1024 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "nvmf_set_crdt", 00:23:25.227 "params": { 00:23:25.227 "crdt1": 0, 00:23:25.227 "crdt2": 0, 00:23:25.227 
"crdt3": 0 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "nvmf_create_transport", 00:23:25.227 "params": { 00:23:25.227 "trtype": "TCP", 00:23:25.227 "max_queue_depth": 128, 00:23:25.227 "max_io_qpairs_per_ctrlr": 127, 00:23:25.227 "in_capsule_data_size": 4096, 00:23:25.227 "max_io_size": 131072, 00:23:25.227 "io_unit_size": 131072, 00:23:25.227 "max_aq_depth": 128, 00:23:25.227 "num_shared_buffers": 511, 00:23:25.227 "buf_cache_size": 4294967295, 00:23:25.227 "dif_insert_or_strip": false, 00:23:25.227 "zcopy": false, 00:23:25.227 "c2h_success": false, 00:23:25.227 "sock_priority": 0, 00:23:25.227 "abort_timeout_sec": 1, 00:23:25.227 "ack_timeout": 0, 00:23:25.227 "data_wr_pool_size": 0 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "nvmf_create_subsystem", 00:23:25.227 "params": { 00:23:25.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.227 "allow_any_host": false, 00:23:25.227 "serial_number": "00000000000000000000", 00:23:25.227 "model_number": "SPDK bdev Controller", 00:23:25.227 "max_namespaces": 32, 00:23:25.227 "min_cntlid": 1, 00:23:25.227 "max_cntlid": 65519, 00:23:25.227 "ana_reporting": false 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "nvmf_subsystem_add_host", 00:23:25.227 "params": { 00:23:25.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.227 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.227 "psk": "key0" 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "nvmf_subsystem_add_ns", 00:23:25.227 "params": { 00:23:25.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.227 "namespace": { 00:23:25.227 "nsid": 1, 00:23:25.227 "bdev_name": "malloc0", 00:23:25.227 "nguid": "5CE1DB43A7274D2D8D3D9BFB72F8BF0C", 00:23:25.227 "uuid": "5ce1db43-a727-4d2d-8d3d-9bfb72f8bf0c", 00:23:25.227 "no_auto_visible": false 00:23:25.227 } 00:23:25.227 } 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "method": "nvmf_subsystem_add_listener", 00:23:25.227 "params": { 00:23:25.227 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:25.227 "listen_address": { 00:23:25.227 "trtype": "TCP", 00:23:25.227 "adrfam": "IPv4", 00:23:25.227 "traddr": "10.0.0.2", 00:23:25.227 "trsvcid": "4420" 00:23:25.227 }, 00:23:25.227 "secure_channel": false, 00:23:25.227 "sock_impl": "ssl" 00:23:25.227 } 00:23:25.227 } 00:23:25.227 ] 00:23:25.227 } 00:23:25.227 ] 00:23:25.227 }' 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2269670 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2269670 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2269670 ']' 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.227 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.227 [2024-12-05 13:54:56.742645] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:23:25.227 [2024-12-05 13:54:56.742744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.484 [2024-12-05 13:54:56.813550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.484 [2024-12-05 13:54:56.866360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.484 [2024-12-05 13:54:56.866426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.484 [2024-12-05 13:54:56.866457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.484 [2024-12-05 13:54:56.866468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.484 [2024-12-05 13:54:56.866477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.484 [2024-12-05 13:54:56.867074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.741 [2024-12-05 13:54:57.108663] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.741 [2024-12-05 13:54:57.140695] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.741 [2024-12-05 13:54:57.140923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2269827 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2269827 /var/tmp/bdevperf.sock 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2269827 ']' 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:26.306 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:26.306 "subsystems": [ 00:23:26.306 { 00:23:26.306 "subsystem": "keyring", 00:23:26.307 "config": [ 00:23:26.307 { 00:23:26.307 "method": "keyring_file_add_key", 00:23:26.307 "params": { 00:23:26.307 "name": "key0", 00:23:26.307 "path": "/tmp/tmp.9mE94VFsKi" 00:23:26.307 } 00:23:26.307 } 00:23:26.307 ] 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "subsystem": "iobuf", 00:23:26.307 "config": [ 00:23:26.307 { 00:23:26.307 "method": "iobuf_set_options", 00:23:26.307 "params": { 00:23:26.307 "small_pool_count": 8192, 00:23:26.307 "large_pool_count": 1024, 00:23:26.307 "small_bufsize": 8192, 00:23:26.307 "large_bufsize": 135168, 00:23:26.307 "enable_numa": false 00:23:26.307 } 00:23:26.307 } 00:23:26.307 ] 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "subsystem": "sock", 00:23:26.307 "config": [ 00:23:26.307 { 00:23:26.307 "method": "sock_set_default_impl", 00:23:26.307 "params": { 00:23:26.307 "impl_name": "posix" 00:23:26.307 } 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "method": "sock_impl_set_options", 00:23:26.307 "params": { 00:23:26.307 "impl_name": "ssl", 00:23:26.307 "recv_buf_size": 4096, 00:23:26.307 "send_buf_size": 4096, 00:23:26.307 "enable_recv_pipe": true, 00:23:26.307 "enable_quickack": false, 00:23:26.307 "enable_placement_id": 0, 00:23:26.307 "enable_zerocopy_send_server": true, 00:23:26.307 "enable_zerocopy_send_client": false, 00:23:26.307 "zerocopy_threshold": 0, 00:23:26.307 "tls_version": 0, 00:23:26.307 "enable_ktls": false 00:23:26.307 } 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "method": "sock_impl_set_options", 00:23:26.307 "params": { 00:23:26.307 "impl_name": "posix", 00:23:26.307 "recv_buf_size": 2097152, 00:23:26.307 "send_buf_size": 2097152, 00:23:26.307 "enable_recv_pipe": true, 00:23:26.307 "enable_quickack": false, 00:23:26.307 "enable_placement_id": 0, 00:23:26.307 "enable_zerocopy_send_server": true, 00:23:26.307 
"enable_zerocopy_send_client": false, 00:23:26.307 "zerocopy_threshold": 0, 00:23:26.307 "tls_version": 0, 00:23:26.307 "enable_ktls": false 00:23:26.307 } 00:23:26.307 } 00:23:26.307 ] 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "subsystem": "vmd", 00:23:26.307 "config": [] 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "subsystem": "accel", 00:23:26.307 "config": [ 00:23:26.307 { 00:23:26.307 "method": "accel_set_options", 00:23:26.307 "params": { 00:23:26.307 "small_cache_size": 128, 00:23:26.307 "large_cache_size": 16, 00:23:26.307 "task_count": 2048, 00:23:26.307 "sequence_count": 2048, 00:23:26.307 "buf_count": 2048 00:23:26.307 } 00:23:26.307 } 00:23:26.307 ] 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "subsystem": "bdev", 00:23:26.307 "config": [ 00:23:26.307 { 00:23:26.307 "method": "bdev_set_options", 00:23:26.307 "params": { 00:23:26.307 "bdev_io_pool_size": 65535, 00:23:26.307 "bdev_io_cache_size": 256, 00:23:26.307 "bdev_auto_examine": true, 00:23:26.307 "iobuf_small_cache_size": 128, 00:23:26.307 "iobuf_large_cache_size": 16 00:23:26.307 } 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "method": "bdev_raid_set_options", 00:23:26.307 "params": { 00:23:26.307 "process_window_size_kb": 1024, 00:23:26.307 "process_max_bandwidth_mb_sec": 0 00:23:26.307 } 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "method": "bdev_iscsi_set_options", 00:23:26.307 "params": { 00:23:26.307 "timeout_sec": 30 00:23:26.307 } 00:23:26.307 }, 00:23:26.307 { 00:23:26.307 "method": "bdev_nvme_set_options", 00:23:26.307 "params": { 00:23:26.307 "action_on_timeout": "none", 00:23:26.307 "timeout_us": 0, 00:23:26.307 "timeout_admin_us": 0, 00:23:26.307 "keep_alive_timeout_ms": 10000, 00:23:26.307 "arbitration_burst": 0, 00:23:26.307 "low_priority_weight": 0, 00:23:26.307 "medium_priority_weight": 0, 00:23:26.307 "high_priority_weight": 0, 00:23:26.307 "nvme_adminq_poll_period_us": 10000, 00:23:26.307 "nvme_ioq_poll_period_us": 0, 00:23:26.307 "io_queue_requests": 512, 00:23:26.307 
"delay_cmd_submit": true, 00:23:26.307 "transport_retry_count": 4, 00:23:26.307 "bdev_retry_count": 3, 00:23:26.307 "transport_ack_timeout": 0, 00:23:26.307 "ctrlr_loss_timeout_sec": 0, 00:23:26.307 "reconnect_delay_sec": 0, 00:23:26.307 "fast_io_fail_timeout_sec": 0, 00:23:26.307 "disable_auto_failback": false, 00:23:26.307 "generate_uuids": false, 00:23:26.307 "transport_tos": 0, 00:23:26.307 "nvme_error_stat": false, 00:23:26.307 "rdma_srq_size": 0, 00:23:26.307 "io_path_stat": false, 00:23:26.307 "allow_accel_sequence": false, 00:23:26.307 "rdma_max_cq_size": 0, 00:23:26.307 "rdma_cm_event_timeout_ms": 0 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.307 , 00:23:26.308 "dhchap_digests": [ 00:23:26.308 "sha256", 00:23:26.308 "sha384", 00:23:26.308 "sha512" 00:23:26.308 ], 00:23:26.308 "dhchap_dhgroups": [ 00:23:26.308 "null", 00:23:26.308 "ffdhe2048", 00:23:26.308 "ffdhe3072", 00:23:26.308 "ffdhe4096", 00:23:26.308 "ffdhe6144", 00:23:26.308 "ffdhe8192" 00:23:26.308 ] 00:23:26.308 } 00:23:26.308 }, 00:23:26.308 { 00:23:26.308 "method": "bdev_nvme_attach_controller", 00:23:26.308 "params": { 00:23:26.308 "name": "nvme0", 00:23:26.308 "trtype": "TCP", 00:23:26.308 "adrfam": "IPv4", 00:23:26.308 "traddr": "10.0.0.2", 00:23:26.308 "trsvcid": "4420", 00:23:26.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.308 "prchk_reftag": false, 00:23:26.308 "prchk_guard": false, 00:23:26.308 "ctrlr_loss_timeout_sec": 0, 00:23:26.308 "reconnect_delay_sec": 0, 00:23:26.308 "fast_io_fail_timeout_sec": 0, 00:23:26.308 "psk": "key0", 00:23:26.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.308 "hdgst": false, 00:23:26.308 "ddgst": false, 00:23:26.308 "multipath": "multipath" 00:23:26.308 } 00:23:26.308 }, 00:23:26.308 { 00:23:26.308 "method": "bdev_nvme_set_hotplug", 00:23:26.308 "params": { 00:23:26.308 "period_us": 100000, 
00:23:26.308 "enable": false 00:23:26.308 } 00:23:26.308 }, 00:23:26.308 { 00:23:26.308 "method": "bdev_enable_histogram", 00:23:26.308 "params": { 00:23:26.308 "name": "nvme0n1", 00:23:26.308 "enable": true 00:23:26.308 } 00:23:26.308 }, 00:23:26.308 { 00:23:26.308 "method": "bdev_wait_for_examine" 00:23:26.308 } 00:23:26.308 ] 00:23:26.308 }, 00:23:26.308 { 00:23:26.308 "subsystem": "nbd", 00:23:26.308 "config": [] 00:23:26.308 } 00:23:26.308 ] 00:23:26.308 }' 00:23:26.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.308 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.308 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.308 [2024-12-05 13:54:57.798920] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:26.308 [2024-12-05 13:54:57.799002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269827 ] 00:23:26.566 [2024-12-05 13:54:57.866158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.566 [2024-12-05 13:54:57.921169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.824 [2024-12-05 13:54:58.103279] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.824 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.824 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.824 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.824 13:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:27.081 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.081 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.081 Running I/O for 1 seconds... 00:23:28.454 2999.00 IOPS, 11.71 MiB/s 00:23:28.454 Latency(us) 00:23:28.454 [2024-12-05T12:54:59.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.454 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:28.454 Verification LBA range: start 0x0 length 0x2000 00:23:28.454 nvme0n1 : 1.03 3027.93 11.83 0.00 0.00 41661.95 9272.13 28738.75 00:23:28.454 [2024-12-05T12:54:59.980Z] =================================================================================================================== 00:23:28.454 [2024-12-05T12:54:59.980Z] Total : 3027.93 11.83 0.00 0.00 41661.95 9272.13 28738.75 00:23:28.454 { 00:23:28.454 "results": [ 00:23:28.454 { 00:23:28.454 "job": "nvme0n1", 00:23:28.454 "core_mask": "0x2", 00:23:28.454 "workload": "verify", 00:23:28.454 "status": "finished", 00:23:28.454 "verify_range": { 00:23:28.454 "start": 0, 00:23:28.454 "length": 8192 00:23:28.454 }, 00:23:28.454 "queue_depth": 128, 00:23:28.454 "io_size": 4096, 00:23:28.454 "runtime": 1.032719, 00:23:28.454 "iops": 3027.9291849961123, 00:23:28.454 "mibps": 11.827848378891064, 00:23:28.454 "io_failed": 0, 00:23:28.454 "io_timeout": 0, 00:23:28.454 "avg_latency_us": 41661.94521408521, 00:23:28.454 "min_latency_us": 9272.13037037037, 00:23:28.454 "max_latency_us": 28738.74962962963 00:23:28.454 } 00:23:28.454 ], 00:23:28.454 "core_count": 1 00:23:28.454 } 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:28.454 13:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:28.454 nvmf_trace.0 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2269827 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2269827 ']' 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2269827 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2269827 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269827' 00:23:28.454 killing process with pid 2269827 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2269827 00:23:28.454 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.454 00:23:28.454 Latency(us) 00:23:28.454 [2024-12-05T12:54:59.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.454 [2024-12-05T12:54:59.980Z] =================================================================================================================== 00:23:28.454 [2024-12-05T12:54:59.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.454 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2269827 00:23:28.713 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:28.713 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.713 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:28.713 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.713 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:28.713 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.713 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.713 rmmod nvme_tcp 00:23:28.713 rmmod nvme_fabrics 00:23:28.713 rmmod nvme_keyring 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2269670 ']' 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2269670 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2269670 ']' 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2269670 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269670 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269670' 00:23:28.713 killing process with pid 2269670 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2269670 00:23:28.713 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2269670 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.972 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.880 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.880 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Dp5cFpVpXP /tmp/tmp.haZ6ePOZ46 /tmp/tmp.9mE94VFsKi 00:23:30.880 00:23:30.880 real 1m22.631s 00:23:30.880 user 2m17.345s 00:23:30.880 sys 0m25.378s 00:23:30.880 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.880 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.880 ************************************ 00:23:30.880 END TEST nvmf_tls 00:23:30.880 ************************************ 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:31.139 ************************************ 00:23:31.139 START TEST nvmf_fips 00:23:31.139 ************************************ 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:31.139 * Looking for test storage... 00:23:31.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.139 
13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:31.139 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:31.140 13:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:31.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.140 --rc genhtml_branch_coverage=1 00:23:31.140 --rc genhtml_function_coverage=1 00:23:31.140 --rc genhtml_legend=1 00:23:31.140 --rc geninfo_all_blocks=1 00:23:31.140 --rc geninfo_unexecuted_blocks=1 00:23:31.140 00:23:31.140 ' 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:31.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.140 --rc genhtml_branch_coverage=1 00:23:31.140 --rc genhtml_function_coverage=1 00:23:31.140 --rc genhtml_legend=1 00:23:31.140 --rc geninfo_all_blocks=1 00:23:31.140 --rc geninfo_unexecuted_blocks=1 00:23:31.140 00:23:31.140 ' 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:31.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.140 --rc genhtml_branch_coverage=1 00:23:31.140 --rc genhtml_function_coverage=1 00:23:31.140 --rc genhtml_legend=1 00:23:31.140 --rc geninfo_all_blocks=1 00:23:31.140 --rc geninfo_unexecuted_blocks=1 00:23:31.140 00:23:31.140 ' 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:31.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.140 --rc genhtml_branch_coverage=1 00:23:31.140 --rc genhtml_function_coverage=1 00:23:31.140 --rc genhtml_legend=1 00:23:31.140 --rc geninfo_all_blocks=1 00:23:31.140 --rc geninfo_unexecuted_blocks=1 00:23:31.140 00:23:31.140 ' 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.140 13:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.140 13:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:31.140 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:31.141 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:31.402 Error setting digest 00:23:31.402 4082CC95777F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:31.402 4082CC95777F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.402 13:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.402 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:33.933 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:33.933 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:33.933 Found net devices under 0000:09:00.0: cvl_0_0 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:33.933 Found net devices under 0000:09:00.1: cvl_0_1 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.933 13:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:23:33.933 00:23:33.933 --- 10.0.0.2 ping statistics --- 00:23:33.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.933 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:23:33.933 00:23:33.933 --- 10.0.0.1 ping statistics --- 00:23:33.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.933 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.933 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.933 13:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2272070 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2272070 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2272070 ']' 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.933 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:33.934 [2024-12-05 13:55:05.091699] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:23:33.934 [2024-12-05 13:55:05.091784] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.934 [2024-12-05 13:55:05.162854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.934 [2024-12-05 13:55:05.214376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.934 [2024-12-05 13:55:05.214430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.934 [2024-12-05 13:55:05.214459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.934 [2024-12-05 13:55:05.214471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.934 [2024-12-05 13:55:05.214480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.934 [2024-12-05 13:55:05.215039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.TWB 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.TWB 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.TWB 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.TWB 00:23:33.934 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:34.191 [2024-12-05 13:55:05.651620] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.191 [2024-12-05 13:55:05.667618] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.191 [2024-12-05 13:55:05.667867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.191 malloc0 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2272212 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2272212 /var/tmp/bdevperf.sock 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2272212 ']' 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.449 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:34.449 [2024-12-05 13:55:05.805401] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:23:34.449 [2024-12-05 13:55:05.805512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272212 ] 00:23:34.449 [2024-12-05 13:55:05.871572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.449 [2024-12-05 13:55:05.929807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.706 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.706 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:34.706 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.TWB 00:23:34.965 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.257 [2024-12-05 13:55:06.551615] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.257 TLSTESTn1 00:23:35.257 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.257 Running I/O for 10 seconds... 
00:23:37.588 3403.00 IOPS, 13.29 MiB/s [2024-12-05T12:55:10.049Z] 3418.00 IOPS, 13.35 MiB/s [2024-12-05T12:55:10.982Z] 3425.00 IOPS, 13.38 MiB/s [2024-12-05T12:55:11.914Z] 3429.00 IOPS, 13.39 MiB/s [2024-12-05T12:55:12.847Z] 3412.20 IOPS, 13.33 MiB/s [2024-12-05T12:55:13.779Z] 3425.00 IOPS, 13.38 MiB/s [2024-12-05T12:55:15.152Z] 3435.00 IOPS, 13.42 MiB/s [2024-12-05T12:55:16.082Z] 3442.62 IOPS, 13.45 MiB/s [2024-12-05T12:55:17.015Z] 3436.89 IOPS, 13.43 MiB/s [2024-12-05T12:55:17.015Z] 3437.20 IOPS, 13.43 MiB/s 00:23:45.489 Latency(us) 00:23:45.489 [2024-12-05T12:55:17.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.489 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:45.489 Verification LBA range: start 0x0 length 0x2000 00:23:45.489 TLSTESTn1 : 10.02 3442.90 13.45 0.00 0.00 37114.09 6699.24 50486.99 00:23:45.489 [2024-12-05T12:55:17.015Z] =================================================================================================================== 00:23:45.489 [2024-12-05T12:55:17.015Z] Total : 3442.90 13.45 0.00 0.00 37114.09 6699.24 50486.99 00:23:45.489 { 00:23:45.489 "results": [ 00:23:45.489 { 00:23:45.489 "job": "TLSTESTn1", 00:23:45.489 "core_mask": "0x4", 00:23:45.489 "workload": "verify", 00:23:45.489 "status": "finished", 00:23:45.489 "verify_range": { 00:23:45.489 "start": 0, 00:23:45.489 "length": 8192 00:23:45.489 }, 00:23:45.489 "queue_depth": 128, 00:23:45.489 "io_size": 4096, 00:23:45.489 "runtime": 10.020335, 00:23:45.489 "iops": 3442.898865157702, 00:23:45.489 "mibps": 13.448823692022273, 00:23:45.489 "io_failed": 0, 00:23:45.489 "io_timeout": 0, 00:23:45.489 "avg_latency_us": 37114.08864986962, 00:23:45.489 "min_latency_us": 6699.235555555556, 00:23:45.489 "max_latency_us": 50486.99259259259 00:23:45.489 } 00:23:45.489 ], 00:23:45.489 "core_count": 1 00:23:45.489 } 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:45.489 
13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:45.489 nvmf_trace.0 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2272212 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2272212 ']' 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2272212 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272212 00:23:45.489 13:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272212' 00:23:45.489 killing process with pid 2272212 00:23:45.489 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2272212 00:23:45.489 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.490 00:23:45.490 Latency(us) 00:23:45.490 [2024-12-05T12:55:17.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.490 [2024-12-05T12:55:17.016Z] =================================================================================================================== 00:23:45.490 [2024-12-05T12:55:17.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.490 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2272212 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.748 rmmod nvme_tcp 00:23:45.748 rmmod nvme_fabrics 00:23:45.748 rmmod nvme_keyring 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2272070 ']' 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2272070 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2272070 ']' 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2272070 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272070 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272070' 00:23:45.748 killing process with pid 2272070 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2272070 00:23:45.748 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2272070 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.007 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.TWB 00:23:48.543 00:23:48.543 real 0m17.102s 00:23:48.543 user 0m22.165s 00:23:48.543 sys 0m5.690s 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:48.543 ************************************ 00:23:48.543 END TEST nvmf_fips 00:23:48.543 ************************************ 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:48.543 ************************************ 00:23:48.543 START TEST nvmf_control_msg_list 00:23:48.543 ************************************ 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:48.543 * Looking for test storage... 00:23:48.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.543 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.543 --rc genhtml_branch_coverage=1 00:23:48.543 --rc genhtml_function_coverage=1 00:23:48.543 --rc genhtml_legend=1 00:23:48.543 --rc geninfo_all_blocks=1 00:23:48.543 --rc geninfo_unexecuted_blocks=1 00:23:48.543 00:23:48.543 ' 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.543 --rc genhtml_branch_coverage=1 00:23:48.543 --rc genhtml_function_coverage=1 00:23:48.543 --rc genhtml_legend=1 00:23:48.543 --rc geninfo_all_blocks=1 00:23:48.543 --rc geninfo_unexecuted_blocks=1 00:23:48.543 00:23:48.543 ' 00:23:48.543 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.543 --rc genhtml_branch_coverage=1 00:23:48.543 --rc genhtml_function_coverage=1 00:23:48.544 --rc genhtml_legend=1 00:23:48.544 --rc geninfo_all_blocks=1 00:23:48.544 --rc geninfo_unexecuted_blocks=1 00:23:48.544 00:23:48.544 ' 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:23:48.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.544 --rc genhtml_branch_coverage=1 00:23:48.544 --rc genhtml_function_coverage=1 00:23:48.544 --rc genhtml_legend=1 00:23:48.544 --rc geninfo_all_blocks=1 00:23:48.544 --rc geninfo_unexecuted_blocks=1 00:23:48.544 00:23:48.544 ' 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.544 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.544 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.544 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.444 13:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:50.444 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:50.444 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.444 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.445 13:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:50.445 Found net devices under 0000:09:00.0: cvl_0_0 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.445 13:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:50.445 Found net devices under 0000:09:00.1: cvl_0_1 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.445 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.703 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.703 13:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.703 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:23:50.703 00:23:50.703 --- 10.0.0.2 ping statistics --- 00:23:50.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.703 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:23:50.703 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:50.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:23:50.704 00:23:50.704 --- 10.0.0.1 ping statistics --- 00:23:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.704 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.704 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2275478 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2275478 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2275478 ']' 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.704 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.704 [2024-12-05 13:55:22.065443] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:50.704 [2024-12-05 13:55:22.065550] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.704 [2024-12-05 13:55:22.138686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.704 [2024-12-05 13:55:22.193342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.704 [2024-12-05 13:55:22.193397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.704 [2024-12-05 13:55:22.193433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.704 [2024-12-05 13:55:22.193445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.704 [2024-12-05 13:55:22.193464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
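For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) amounts to the following topology: the target NIC port is moved into a private network namespace with 10.0.0.2 while the initiator port stays in the root namespace as 10.0.0.1. This is a hedged dry-run sketch, not the harness itself; the `run` wrapper only prints the commands, and the cvl_0_* names are just the ice-driver devices this particular rig exposed.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP test topology built in the log above.
run() { echo "+ $*"; }            # swap for: run() { sudo "$@"; } to apply for real
TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TGT"                                  # start from clean ports
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                           # target side, namespaced
run ip addr add 10.0.0.1/24 dev "$INI"                       # initiator IP (root ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"   # target IP (inside ns)
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # root ns -> namespaced target
```

Because the two ports sit on the same physical adapter, the namespace split is what forces traffic onto the wire instead of the kernel loopback path.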
00:23:50.704 [2024-12-05 13:55:22.194151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.962 [2024-12-05 13:55:22.345834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.962 Malloc0 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.962 [2024-12-05 13:55:22.385646] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2275512 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.962 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2275513 00:23:50.963 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.963 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2275514 00:23:50.963 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2275512 00:23:50.963 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.963 [2024-12-05 13:55:22.444116] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
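Reissued by hand, the control_msg_list.sh@19-23 RPC sequence above would look roughly like this. It is a sketch only: the `scripts/rpc.py` client path is an assumption (the log shows the /var/tmp/spdk.sock socket but not the client), while the flags and names are the ones visible in the trace. The `run` wrapper prints rather than executes.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the subsystem setup RPCs traced above.
run() { echo "+ $*"; }                        # swap for: run() { "$@"; } to apply
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"    # assumed client invocation
NQN=nqn.2024-07.io.spdk:cnode0

# Transport with a deliberately tiny control-message list -- the point of this test
run $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
run $RPC nvmf_create_subsystem "$NQN" -a       # -a: allow any host
run $RPC bdev_malloc_create -b Malloc0 32 512  # 32 MiB bdev, 512 B blocks
run $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
run $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

With only one control message available, the three concurrent spdk_nvme_perf initiators launched next have to queue behind each other, which is why two of the latency tables below report ~40 ms averages against the third's ~166 us.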
00:23:50.963 [2024-12-05 13:55:22.454505] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:50.963 [2024-12-05 13:55:22.454747] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:52.335 Initializing NVMe Controllers 00:23:52.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:52.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:52.335 Initialization complete. Launching workers. 00:23:52.335 ======================================================== 00:23:52.335 Latency(us) 00:23:52.335 Device Information : IOPS MiB/s Average min max 00:23:52.335 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40915.34 40262.57 41815.17 00:23:52.335 ======================================================== 00:23:52.335 Total : 25.00 0.10 40915.34 40262.57 41815.17 00:23:52.335 00:23:52.335 Initializing NVMe Controllers 00:23:52.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:52.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:52.335 Initialization complete. Launching workers. 
00:23:52.335 ======================================================== 00:23:52.335 Latency(us) 00:23:52.335 Device Information : IOPS MiB/s Average min max 00:23:52.335 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6004.00 23.45 166.13 154.69 311.94 00:23:52.335 ======================================================== 00:23:52.335 Total : 6004.00 23.45 166.13 154.69 311.94 00:23:52.335 00:23:52.335 Initializing NVMe Controllers 00:23:52.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:52.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:52.335 Initialization complete. Launching workers. 00:23:52.335 ======================================================== 00:23:52.336 Latency(us) 00:23:52.336 Device Information : IOPS MiB/s Average min max 00:23:52.336 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41306.35 40841.85 41928.12 00:23:52.336 ======================================================== 00:23:52.336 Total : 25.00 0.10 41306.35 40841.85 41928.12 00:23:52.336 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2275513 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2275514 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.336 13:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.336 rmmod nvme_tcp 00:23:52.336 rmmod nvme_fabrics 00:23:52.336 rmmod nvme_keyring 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2275478 ']' 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2275478 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2275478 ']' 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2275478 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2275478 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2275478' 00:23:52.336 killing process with pid 2275478 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2275478 00:23:52.336 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2275478 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.594 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.496 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.496 00:23:54.496 real 0m6.418s 00:23:54.496 user 0m5.691s 
00:23:54.496 sys 0m2.691s 00:23:54.496 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.496 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:54.496 ************************************ 00:23:54.496 END TEST nvmf_control_msg_list 00:23:54.496 ************************************ 00:23:54.755 13:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:54.755 13:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.755 13:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.755 13:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:54.755 ************************************ 00:23:54.755 START TEST nvmf_wait_for_buf 00:23:54.755 ************************************ 00:23:54.755 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:54.755 * Looking for test storage... 
00:23:54.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:54.755 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:23:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.756 --rc genhtml_branch_coverage=1 00:23:54.756 --rc genhtml_function_coverage=1 00:23:54.756 --rc genhtml_legend=1 00:23:54.756 --rc geninfo_all_blocks=1 00:23:54.756 --rc geninfo_unexecuted_blocks=1 00:23:54.756 00:23:54.756 ' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.756 --rc genhtml_branch_coverage=1 00:23:54.756 --rc genhtml_function_coverage=1 00:23:54.756 --rc genhtml_legend=1 00:23:54.756 --rc geninfo_all_blocks=1 00:23:54.756 --rc geninfo_unexecuted_blocks=1 00:23:54.756 00:23:54.756 ' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.756 --rc genhtml_branch_coverage=1 00:23:54.756 --rc genhtml_function_coverage=1 00:23:54.756 --rc genhtml_legend=1 00:23:54.756 --rc geninfo_all_blocks=1 00:23:54.756 --rc geninfo_unexecuted_blocks=1 00:23:54.756 00:23:54.756 ' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.756 --rc genhtml_branch_coverage=1 00:23:54.756 --rc genhtml_function_coverage=1 00:23:54.756 --rc genhtml_legend=1 00:23:54.756 --rc geninfo_all_blocks=1 00:23:54.756 --rc geninfo_unexecuted_blocks=1 00:23:54.756 00:23:54.756 ' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.756 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.757 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:57.288 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.288 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:57.289 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:57.289 Found net devices under 0000:09:00.0: cvl_0_0 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.289 13:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:57.289 Found net devices under 0000:09:00.1: cvl_0_1 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:57.289 13:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.289 13:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:23:57.289 00:23:57.289 --- 10.0.0.2 ping statistics --- 00:23:57.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.289 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:23:57.289 00:23:57.289 --- 10.0.0.1 ping statistics --- 00:23:57.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.289 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2277704 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2277704 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2277704 ']' 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.289 [2024-12-05 13:55:28.414996] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:57.289 [2024-12-05 13:55:28.415090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.289 [2024-12-05 13:55:28.486542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.289 [2024-12-05 13:55:28.542077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.289 [2024-12-05 13:55:28.542132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:57.289 [2024-12-05 13:55:28.542161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.289 [2024-12-05 13:55:28.542172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.289 [2024-12-05 13:55:28.542182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.289 [2024-12-05 13:55:28.542831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.289 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 
13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 Malloc0 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.290 [2024-12-05 13:55:28.793897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.290 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.548 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.548 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:57.548 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.548 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.548 [2024-12-05 13:55:28.818072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.548 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:57.548 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:57.548 [2024-12-05 13:55:28.899560] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:58.920 Initializing NVMe Controllers 00:23:58.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:58.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:58.920 Initialization complete. Launching workers. 00:23:58.920 ======================================================== 00:23:58.920 Latency(us) 00:23:58.920 Device Information : IOPS MiB/s Average min max 00:23:58.920 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 39.86 4.98 103945.09 31926.88 191529.49 00:23:58.920 ======================================================== 00:23:58.920 Total : 39.86 4.98 103945.09 31926.88 191529.49 00:23:58.920 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.920 13:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=614 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 614 -eq 0 ]] 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:58.920 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:58.920 rmmod nvme_tcp 00:23:58.920 rmmod nvme_fabrics 00:23:58.920 rmmod nvme_keyring 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2277704 ']' 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2277704 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2277704 ']' 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2277704 
00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277704 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277704' 00:23:59.178 killing process with pid 2277704 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2277704 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2277704 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.178 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.434 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.435 13:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.435 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.435 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.435 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:01.373 00:24:01.373 real 0m6.700s 00:24:01.373 user 0m3.174s 00:24:01.373 sys 0m2.001s 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.373 ************************************ 00:24:01.373 END TEST nvmf_wait_for_buf 00:24:01.373 ************************************ 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.373 13:55:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.906 
13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.906 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:03.907 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.907 13:55:34 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:03.907 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:03.907 Found net devices under 0000:09:00.0: cvl_0_0 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:03.907 Found net devices under 0000:09:00.1: cvl_0_1 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:03.907 ************************************ 00:24:03.907 START TEST nvmf_perf_adq 00:24:03.907 ************************************ 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:03.907 * Looking for test storage... 00:24:03.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:24:03.907 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:03.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.907 --rc genhtml_branch_coverage=1 00:24:03.907 --rc genhtml_function_coverage=1 00:24:03.907 --rc genhtml_legend=1 00:24:03.907 --rc geninfo_all_blocks=1 00:24:03.907 --rc geninfo_unexecuted_blocks=1 00:24:03.907 00:24:03.907 ' 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:03.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.907 --rc genhtml_branch_coverage=1 00:24:03.907 --rc genhtml_function_coverage=1 00:24:03.907 --rc genhtml_legend=1 00:24:03.907 --rc geninfo_all_blocks=1 00:24:03.907 --rc geninfo_unexecuted_blocks=1 00:24:03.907 00:24:03.907 ' 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:03.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.907 --rc genhtml_branch_coverage=1 00:24:03.907 --rc genhtml_function_coverage=1 00:24:03.907 --rc genhtml_legend=1 00:24:03.907 --rc geninfo_all_blocks=1 00:24:03.907 --rc geninfo_unexecuted_blocks=1 00:24:03.907 00:24:03.907 ' 00:24:03.907 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:03.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.908 --rc genhtml_branch_coverage=1 00:24:03.908 --rc genhtml_function_coverage=1 00:24:03.908 --rc genhtml_legend=1 00:24:03.908 --rc geninfo_all_blocks=1 00:24:03.908 --rc geninfo_unexecuted_blocks=1 00:24:03.908 00:24:03.908 ' 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.908 13:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.908 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.811 13:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:05.811 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:05.811 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.811 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:05.812 Found net devices under 0000:09:00.0: cvl_0_0 00:24:05.812 13:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:05.812 Found net devices under 0000:09:00.1: cvl_0_1 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:24:05.812 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:24:06.432 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:24:08.359 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:24:13.652 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit
00:24:13.652 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:13.652 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:24:13.653 Found 0000:09:00.0 (0x8086 - 0x159b)
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:24:13.653 Found 0000:09:00.1 (0x8086 - 0x159b)
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:24:13.653 Found net devices under 0000:09:00.0: cvl_0_0
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:24:13.653 Found net devices under 0000:09:00.1: cvl_0_1
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:13.653 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:13.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:13.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms
00:24:13.654
00:24:13.654 --- 10.0.0.2 ping statistics ---
00:24:13.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:13.654 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:13.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:13.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:24:13.654
00:24:13.654 --- 10.0.0.1 ping statistics ---
00:24:13.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:13.654 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2282422
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2282422
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2282422 ']'
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:13.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:13.654 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.654 [2024-12-05 13:55:44.977915] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization...
00:24:13.654 [2024-12-05 13:55:44.978004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:13.654 [2024-12-05 13:55:45.049349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:13.654 [2024-12-05 13:55:45.107026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:13.654 [2024-12-05 13:55:45.107077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:13.654 [2024-12-05 13:55:45.107106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:13.654 [2024-12-05 13:55:45.107117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:13.654 [2024-12-05 13:55:45.107126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:13.654 [2024-12-05 13:55:45.108820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:13.654 [2024-12-05 13:55:45.108887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:13.654 [2024-12-05 13:55:45.108953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:13.654 [2024-12-05 13:55:45.108957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.912 [2024-12-05 13:55:45.382164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.912 Malloc1
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.912 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:14.170 [2024-12-05 13:55:45.444660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2282571
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:24:14.170 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:24:16.068 "tick_rate": 2700000000,
00:24:16.068 "poll_groups": [
00:24:16.068 {
00:24:16.068 "name": "nvmf_tgt_poll_group_000",
00:24:16.068 "admin_qpairs": 1,
00:24:16.068 "io_qpairs": 1,
00:24:16.068 "current_admin_qpairs": 1,
00:24:16.068 "current_io_qpairs": 1,
00:24:16.068 "pending_bdev_io": 0,
00:24:16.068 "completed_nvme_io": 20153,
00:24:16.068 "transports": [
00:24:16.068 {
00:24:16.068 "trtype": "TCP"
00:24:16.068 }
00:24:16.068 ]
00:24:16.068 },
00:24:16.068 {
00:24:16.068 "name": "nvmf_tgt_poll_group_001",
00:24:16.068 "admin_qpairs": 0,
00:24:16.068 "io_qpairs": 1,
00:24:16.068 "current_admin_qpairs": 0,
00:24:16.068 "current_io_qpairs": 1,
00:24:16.068 "pending_bdev_io": 0,
00:24:16.068 "completed_nvme_io": 20330,
00:24:16.068 "transports": [
00:24:16.068 {
00:24:16.068 "trtype": "TCP"
00:24:16.068 }
00:24:16.068 ]
00:24:16.068 },
00:24:16.068 {
00:24:16.068 "name": "nvmf_tgt_poll_group_002",
00:24:16.068 "admin_qpairs": 0,
00:24:16.068 "io_qpairs": 1,
00:24:16.068 "current_admin_qpairs": 0,
00:24:16.068 "current_io_qpairs": 1,
00:24:16.068 "pending_bdev_io": 0,
00:24:16.068 "completed_nvme_io": 19775,
00:24:16.068 "transports": [
00:24:16.068 {
00:24:16.068 "trtype": "TCP"
00:24:16.068 }
00:24:16.068 ]
00:24:16.068 },
00:24:16.068 {
00:24:16.068 "name": "nvmf_tgt_poll_group_003",
00:24:16.068 "admin_qpairs": 0,
00:24:16.068 "io_qpairs": 1,
00:24:16.068 "current_admin_qpairs": 0,
00:24:16.068 "current_io_qpairs": 1,
00:24:16.068 "pending_bdev_io": 0,
00:24:16.068 "completed_nvme_io": 19526,
00:24:16.068 "transports": [
00:24:16.068 {
00:24:16.068 "trtype": "TCP"
00:24:16.068 }
00:24:16.068 ]
00:24:16.068 }
00:24:16.068 ]
00:24:16.068 }'
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:24:16.068 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2282571
00:24:24.197 Initializing NVMe Controllers
00:24:24.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:24.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:24:24.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:24:24.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:24:24.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:24:24.197 Initialization complete. Launching workers.
00:24:24.197 ========================================================
00:24:24.197 Latency(us)
00:24:24.197 Device Information : IOPS MiB/s Average min max
00:24:24.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10297.40 40.22 6216.14 2591.09 10665.99
00:24:24.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10598.50 41.40 6039.43 2079.51 10088.44
00:24:24.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10448.30 40.81 6125.02 2481.20 10309.28
00:24:24.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10557.30 41.24 6062.52 2120.52 10231.67
00:24:24.197 ========================================================
00:24:24.197 Total : 41901.49 163.68 6110.02 2079.51 10665.99
00:24:24.197
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2282422 ']'
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2282422
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2282422 ']'
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2282422
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2282422
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2282422'
00:24:24.197 killing process with pid 2282422
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2282422
00:24:24.197 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2282422
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:24.457 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:26.994 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:26.994 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:24:26.994 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:24:26.994 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:24:27.253 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:24:29.152 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:24:34.430 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:34.431 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:34.431 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:34.431 Found net devices under 0000:09:00.0: cvl_0_0 00:24:34.431 13:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:34.431 Found net devices under 0000:09:00.1: cvl_0_1 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:24:34.431 00:24:34.431 --- 10.0.0.2 ping statistics --- 00:24:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.431 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:24:34.431 00:24:34.431 --- 10.0.0.1 ping statistics --- 00:24:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.431 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:34.431 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:34.432 net.core.busy_poll = 1 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:34.432 net.core.busy_read = 1 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2285193 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
2285193 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2285193 ']' 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.432 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.432 [2024-12-05 13:56:05.853485] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:24:34.432 [2024-12-05 13:56:05.853574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.432 [2024-12-05 13:56:05.925903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.691 [2024-12-05 13:56:05.982236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.691 [2024-12-05 13:56:05.982283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.691 [2024-12-05 13:56:05.982312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.692 [2024-12-05 13:56:05.982324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:34.692 [2024-12-05 13:56:05.982334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.692 [2024-12-05 13:56:05.983839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.692 [2024-12-05 13:56:05.983895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.692 [2024-12-05 13:56:05.983959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.692 [2024-12-05 13:56:05.983963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.692 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.951 [2024-12-05 13:56:06.268267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.951 13:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.951 Malloc1 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.951 [2024-12-05 13:56:06.329745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.951 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2285220 
00:24:34.952 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:34.952 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:36.854 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:36.854 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.854 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:36.854 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.854 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:36.854 "tick_rate": 2700000000, 00:24:36.854 "poll_groups": [ 00:24:36.854 { 00:24:36.854 "name": "nvmf_tgt_poll_group_000", 00:24:36.854 "admin_qpairs": 1, 00:24:36.854 "io_qpairs": 2, 00:24:36.854 "current_admin_qpairs": 1, 00:24:36.854 "current_io_qpairs": 2, 00:24:36.854 "pending_bdev_io": 0, 00:24:36.854 "completed_nvme_io": 25972, 00:24:36.854 "transports": [ 00:24:36.854 { 00:24:36.854 "trtype": "TCP" 00:24:36.854 } 00:24:36.854 ] 00:24:36.854 }, 00:24:36.854 { 00:24:36.854 "name": "nvmf_tgt_poll_group_001", 00:24:36.854 "admin_qpairs": 0, 00:24:36.854 "io_qpairs": 2, 00:24:36.854 "current_admin_qpairs": 0, 00:24:36.854 "current_io_qpairs": 2, 00:24:36.854 "pending_bdev_io": 0, 00:24:36.854 "completed_nvme_io": 25176, 00:24:36.854 "transports": [ 00:24:36.854 { 00:24:36.854 "trtype": "TCP" 00:24:36.854 } 00:24:36.854 ] 00:24:36.854 }, 00:24:36.854 { 00:24:36.854 "name": "nvmf_tgt_poll_group_002", 00:24:36.854 "admin_qpairs": 0, 00:24:36.854 "io_qpairs": 0, 00:24:36.854 "current_admin_qpairs": 0, 
00:24:36.854 "current_io_qpairs": 0, 00:24:36.854 "pending_bdev_io": 0, 00:24:36.854 "completed_nvme_io": 0, 00:24:36.854 "transports": [ 00:24:36.854 { 00:24:36.854 "trtype": "TCP" 00:24:36.854 } 00:24:36.854 ] 00:24:36.854 }, 00:24:36.854 { 00:24:36.854 "name": "nvmf_tgt_poll_group_003", 00:24:36.854 "admin_qpairs": 0, 00:24:36.854 "io_qpairs": 0, 00:24:36.854 "current_admin_qpairs": 0, 00:24:36.854 "current_io_qpairs": 0, 00:24:36.854 "pending_bdev_io": 0, 00:24:36.854 "completed_nvme_io": 0, 00:24:36.854 "transports": [ 00:24:36.854 { 00:24:36.854 "trtype": "TCP" 00:24:36.854 } 00:24:36.854 ] 00:24:36.854 } 00:24:36.854 ] 00:24:36.854 }' 00:24:36.854 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:36.854 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:37.111 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:24:37.111 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:24:37.111 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2285220 00:24:45.223 Initializing NVMe Controllers 00:24:45.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:45.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:45.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:45.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:45.223 Initialization complete. Launching workers. 
00:24:45.223 ======================================================== 00:24:45.223 Latency(us) 00:24:45.223 Device Information : IOPS MiB/s Average min max 00:24:45.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5836.40 22.80 10968.37 1647.32 55011.83 00:24:45.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7649.40 29.88 8370.15 2160.30 54261.57 00:24:45.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6302.20 24.62 10157.91 2019.37 54731.09 00:24:45.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7255.10 28.34 8823.72 1661.29 55226.22 00:24:45.223 ======================================================== 00:24:45.223 Total : 27043.09 105.64 9469.20 1647.32 55226.22 00:24:45.223 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.223 rmmod nvme_tcp 00:24:45.223 rmmod nvme_fabrics 00:24:45.223 rmmod nvme_keyring 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:45.223 13:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2285193 ']' 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2285193 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2285193 ']' 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2285193 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285193 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285193' 00:24:45.223 killing process with pid 2285193 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2285193 00:24:45.223 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2285193 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:45.483 
13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.483 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:48.802 00:24:48.802 real 0m45.027s 00:24:48.802 user 2m40.199s 00:24:48.802 sys 0m9.210s 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:48.802 ************************************ 00:24:48.802 END TEST nvmf_perf_adq 00:24:48.802 ************************************ 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.802 ************************************ 00:24:48.802 START TEST nvmf_shutdown 00:24:48.802 ************************************ 00:24:48.802 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:48.802 * Looking for test storage... 00:24:48.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.802 13:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:48.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.802 --rc genhtml_branch_coverage=1 00:24:48.802 --rc genhtml_function_coverage=1 00:24:48.802 --rc genhtml_legend=1 00:24:48.802 --rc geninfo_all_blocks=1 00:24:48.802 --rc geninfo_unexecuted_blocks=1 00:24:48.802 00:24:48.802 ' 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:48.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.802 --rc genhtml_branch_coverage=1 00:24:48.802 --rc genhtml_function_coverage=1 00:24:48.802 --rc genhtml_legend=1 00:24:48.802 --rc geninfo_all_blocks=1 00:24:48.802 --rc geninfo_unexecuted_blocks=1 00:24:48.802 00:24:48.802 ' 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:48.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.802 --rc genhtml_branch_coverage=1 00:24:48.802 --rc genhtml_function_coverage=1 00:24:48.802 --rc genhtml_legend=1 00:24:48.802 --rc geninfo_all_blocks=1 00:24:48.802 --rc geninfo_unexecuted_blocks=1 00:24:48.802 00:24:48.802 ' 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:48.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.802 --rc genhtml_branch_coverage=1 00:24:48.802 --rc genhtml_function_coverage=1 00:24:48.802 --rc genhtml_legend=1 00:24:48.802 --rc geninfo_all_blocks=1 00:24:48.802 --rc geninfo_unexecuted_blocks=1 00:24:48.802 00:24:48.802 ' 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:48.802 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:48.803 ************************************ 00:24:48.803 START TEST nvmf_shutdown_tc1 00:24:48.803 ************************************ 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.803 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.337 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.337 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.337 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.337 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.337 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:51.338 13:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.338 13:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:51.338 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.338 13:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:51.338 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:51.338 Found net devices under 0000:09:00.0: cvl_0_0 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:51.338 Found net devices under 0000:09:00.1: cvl_0_1 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.338 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.338 13:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:24:51.339 00:24:51.339 --- 10.0.0.2 ping statistics --- 00:24:51.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.339 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:24:51.339 00:24:51.339 --- 10.0.0.1 ping statistics --- 00:24:51.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.339 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2288526 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2288526 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2288526 ']' 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:51.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.339 [2024-12-05 13:56:22.608846] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:24:51.339 [2024-12-05 13:56:22.608921] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.339 [2024-12-05 13:56:22.681809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.339 [2024-12-05 13:56:22.737723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.339 [2024-12-05 13:56:22.737777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.339 [2024-12-05 13:56:22.737805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.339 [2024-12-05 13:56:22.737816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.339 [2024-12-05 13:56:22.737826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
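The trace above shows nvmf/common.sh moving the `cvl_0_0` interface into a private network namespace and then launching `nvmf_tgt` through `ip netns exec`. The key shell mechanism is an array prefix: `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` (common.sh@266 and @293) so every later invocation runs inside the namespace. A minimal sketch of just that wrapper-array step — building the command line only, since actually creating namespaces and starting SPDK needs root and hardware:

```shell
#!/usr/bin/env bash
# Sketch of the NVMF_TARGET_NS_CMD pattern from nvmf/common.sh (names and
# flags copied from the log; this only constructs the command, it does not run it).
NVMF_TARGET_NAMESPACE="cvl_0_0_ns_spdk"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF -m 0x1E)

# common.sh@293: prepend the netns wrapper so the target starts inside the namespace
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"
# -> ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
```

Using an array (rather than a flat string) keeps each argument intact through word splitting, which is why the log shows the target command expanding cleanly even with the namespace prefix.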
00:24:51.339 [2024-12-05 13:56:22.739399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.339 [2024-12-05 13:56:22.739531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.339 [2024-12-05 13:56:22.739558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:51.339 [2024-12-05 13:56:22.739562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.339 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.598 [2024-12-05 13:56:22.881937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.598 13:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.598 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.598 Malloc1 00:24:51.598 [2024-12-05 13:56:22.972596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.598 Malloc2 00:24:51.598 Malloc3 00:24:51.598 Malloc4 00:24:51.907 Malloc5 00:24:51.907 Malloc6 00:24:51.907 Malloc7 00:24:51.907 Malloc8 00:24:51.907 Malloc9 
00:24:51.907 Malloc10 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2288707 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2288707 /var/tmp/bdevperf.sock 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2288707 ']' 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:52.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.191 { 00:24:52.191 "params": { 00:24:52.191 "name": "Nvme$subsystem", 00:24:52.191 "trtype": "$TEST_TRANSPORT", 00:24:52.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.191 "adrfam": "ipv4", 00:24:52.191 "trsvcid": "$NVMF_PORT", 00:24:52.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.191 "hdgst": ${hdgst:-false}, 00:24:52.191 "ddgst": ${ddgst:-false} 00:24:52.191 }, 00:24:52.191 "method": "bdev_nvme_attach_controller" 00:24:52.191 } 00:24:52.191 EOF 00:24:52.191 )") 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.191 { 00:24:52.191 "params": { 00:24:52.191 "name": "Nvme$subsystem", 00:24:52.191 "trtype": "$TEST_TRANSPORT", 00:24:52.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.191 "adrfam": "ipv4", 00:24:52.191 "trsvcid": "$NVMF_PORT", 00:24:52.191 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.191 "hdgst": ${hdgst:-false}, 00:24:52.191 "ddgst": ${ddgst:-false} 00:24:52.191 }, 00:24:52.191 "method": "bdev_nvme_attach_controller" 00:24:52.191 } 00:24:52.191 EOF 00:24:52.191 )") 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.191 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.191 { 00:24:52.191 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": ${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.192 { 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": 
${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.192 { 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": ${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.192 { 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": ${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 
00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.192 { 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": ${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.192 { 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": ${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.192 { 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": ${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.192 { 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme$subsystem", 00:24:52.192 "trtype": "$TEST_TRANSPORT", 00:24:52.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "$NVMF_PORT", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.192 "hdgst": ${hdgst:-false}, 00:24:52.192 "ddgst": ${ddgst:-false} 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 } 00:24:52.192 EOF 00:24:52.192 )") 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:52.192 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme1", 00:24:52.192 "trtype": "tcp", 00:24:52.192 "traddr": "10.0.0.2", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "4420", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.192 "hdgst": false, 00:24:52.192 "ddgst": false 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 },{ 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme2", 00:24:52.192 "trtype": "tcp", 00:24:52.192 "traddr": "10.0.0.2", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "4420", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:52.192 "hdgst": false, 00:24:52.192 "ddgst": false 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 },{ 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme3", 00:24:52.192 "trtype": "tcp", 00:24:52.192 "traddr": "10.0.0.2", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "4420", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:52.192 "hdgst": false, 00:24:52.192 "ddgst": false 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 },{ 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme4", 00:24:52.192 "trtype": "tcp", 00:24:52.192 "traddr": "10.0.0.2", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "4420", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:52.192 "hdgst": false, 00:24:52.192 "ddgst": false 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 },{ 
00:24:52.192 "params": { 00:24:52.192 "name": "Nvme5", 00:24:52.192 "trtype": "tcp", 00:24:52.192 "traddr": "10.0.0.2", 00:24:52.192 "adrfam": "ipv4", 00:24:52.192 "trsvcid": "4420", 00:24:52.192 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:52.192 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:52.192 "hdgst": false, 00:24:52.192 "ddgst": false 00:24:52.192 }, 00:24:52.192 "method": "bdev_nvme_attach_controller" 00:24:52.192 },{ 00:24:52.192 "params": { 00:24:52.192 "name": "Nvme6", 00:24:52.192 "trtype": "tcp", 00:24:52.192 "traddr": "10.0.0.2", 00:24:52.193 "adrfam": "ipv4", 00:24:52.193 "trsvcid": "4420", 00:24:52.193 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:52.193 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:52.193 "hdgst": false, 00:24:52.193 "ddgst": false 00:24:52.193 }, 00:24:52.193 "method": "bdev_nvme_attach_controller" 00:24:52.193 },{ 00:24:52.193 "params": { 00:24:52.193 "name": "Nvme7", 00:24:52.193 "trtype": "tcp", 00:24:52.193 "traddr": "10.0.0.2", 00:24:52.193 "adrfam": "ipv4", 00:24:52.193 "trsvcid": "4420", 00:24:52.193 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:52.193 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:52.193 "hdgst": false, 00:24:52.193 "ddgst": false 00:24:52.193 }, 00:24:52.193 "method": "bdev_nvme_attach_controller" 00:24:52.193 },{ 00:24:52.193 "params": { 00:24:52.193 "name": "Nvme8", 00:24:52.193 "trtype": "tcp", 00:24:52.193 "traddr": "10.0.0.2", 00:24:52.193 "adrfam": "ipv4", 00:24:52.193 "trsvcid": "4420", 00:24:52.193 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:52.193 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:52.193 "hdgst": false, 00:24:52.193 "ddgst": false 00:24:52.193 }, 00:24:52.193 "method": "bdev_nvme_attach_controller" 00:24:52.193 },{ 00:24:52.193 "params": { 00:24:52.193 "name": "Nvme9", 00:24:52.193 "trtype": "tcp", 00:24:52.193 "traddr": "10.0.0.2", 00:24:52.193 "adrfam": "ipv4", 00:24:52.193 "trsvcid": "4420", 00:24:52.193 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:52.193 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:24:52.193 "hdgst": false, 00:24:52.193 "ddgst": false 00:24:52.193 }, 00:24:52.193 "method": "bdev_nvme_attach_controller" 00:24:52.193 },{ 00:24:52.193 "params": { 00:24:52.193 "name": "Nvme10", 00:24:52.193 "trtype": "tcp", 00:24:52.193 "traddr": "10.0.0.2", 00:24:52.193 "adrfam": "ipv4", 00:24:52.193 "trsvcid": "4420", 00:24:52.193 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:52.193 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:52.193 "hdgst": false, 00:24:52.193 "ddgst": false 00:24:52.193 }, 00:24:52.193 "method": "bdev_nvme_attach_controller" 00:24:52.193 }' 00:24:52.193 [2024-12-05 13:56:23.483909] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:24:52.193 [2024-12-05 13:56:23.483986] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:52.193 [2024-12-05 13:56:23.555319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.193 [2024-12-05 13:56:23.612717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2288707 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:54.092 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:55.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2288707 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2288526 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.023 { 00:24:55.023 "params": { 00:24:55.023 "name": "Nvme$subsystem", 00:24:55.023 "trtype": "$TEST_TRANSPORT", 00:24:55.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.023 "adrfam": "ipv4", 00:24:55.023 "trsvcid": "$NVMF_PORT", 00:24:55.023 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.023 "hdgst": ${hdgst:-false}, 00:24:55.023 "ddgst": ${ddgst:-false} 00:24:55.023 }, 00:24:55.023 "method": "bdev_nvme_attach_controller" 00:24:55.023 } 00:24:55.023 EOF 00:24:55.023 )") 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.023 { 00:24:55.023 "params": { 00:24:55.023 "name": "Nvme$subsystem", 00:24:55.023 "trtype": "$TEST_TRANSPORT", 00:24:55.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.023 "adrfam": "ipv4", 00:24:55.023 "trsvcid": "$NVMF_PORT", 00:24:55.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.023 "hdgst": ${hdgst:-false}, 00:24:55.023 "ddgst": ${ddgst:-false} 00:24:55.023 }, 00:24:55.023 "method": "bdev_nvme_attach_controller" 00:24:55.023 } 00:24:55.023 EOF 00:24:55.023 )") 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.023 { 00:24:55.023 "params": { 00:24:55.023 "name": "Nvme$subsystem", 00:24:55.023 "trtype": "$TEST_TRANSPORT", 00:24:55.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.023 "adrfam": "ipv4", 00:24:55.023 "trsvcid": "$NVMF_PORT", 00:24:55.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.023 "hdgst": 
${hdgst:-false}, 00:24:55.023 "ddgst": ${ddgst:-false} 00:24:55.023 }, 00:24:55.023 "method": "bdev_nvme_attach_controller" 00:24:55.023 } 00:24:55.023 EOF 00:24:55.023 )") 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.023 { 00:24:55.023 "params": { 00:24:55.023 "name": "Nvme$subsystem", 00:24:55.023 "trtype": "$TEST_TRANSPORT", 00:24:55.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.023 "adrfam": "ipv4", 00:24:55.023 "trsvcid": "$NVMF_PORT", 00:24:55.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.023 "hdgst": ${hdgst:-false}, 00:24:55.023 "ddgst": ${ddgst:-false} 00:24:55.023 }, 00:24:55.023 "method": "bdev_nvme_attach_controller" 00:24:55.023 } 00:24:55.023 EOF 00:24:55.023 )") 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.023 { 00:24:55.023 "params": { 00:24:55.023 "name": "Nvme$subsystem", 00:24:55.023 "trtype": "$TEST_TRANSPORT", 00:24:55.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.023 "adrfam": "ipv4", 00:24:55.023 "trsvcid": "$NVMF_PORT", 00:24:55.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.023 "hdgst": ${hdgst:-false}, 00:24:55.023 "ddgst": ${ddgst:-false} 00:24:55.023 }, 00:24:55.023 "method": "bdev_nvme_attach_controller" 
00:24:55.023 } 00:24:55.023 EOF 00:24:55.023 )") 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.023 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.023 { 00:24:55.023 "params": { 00:24:55.023 "name": "Nvme$subsystem", 00:24:55.023 "trtype": "$TEST_TRANSPORT", 00:24:55.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.023 "adrfam": "ipv4", 00:24:55.023 "trsvcid": "$NVMF_PORT", 00:24:55.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.023 "hdgst": ${hdgst:-false}, 00:24:55.023 "ddgst": ${ddgst:-false} 00:24:55.023 }, 00:24:55.023 "method": "bdev_nvme_attach_controller" 00:24:55.023 } 00:24:55.023 EOF 00:24:55.023 )") 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.024 { 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme$subsystem", 00:24:55.024 "trtype": "$TEST_TRANSPORT", 00:24:55.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "$NVMF_PORT", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.024 "hdgst": ${hdgst:-false}, 00:24:55.024 "ddgst": ${ddgst:-false} 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 } 00:24:55.024 EOF 00:24:55.024 )") 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.024 { 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme$subsystem", 00:24:55.024 "trtype": "$TEST_TRANSPORT", 00:24:55.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "$NVMF_PORT", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.024 "hdgst": ${hdgst:-false}, 00:24:55.024 "ddgst": ${ddgst:-false} 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 } 00:24:55.024 EOF 00:24:55.024 )") 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.024 { 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme$subsystem", 00:24:55.024 "trtype": "$TEST_TRANSPORT", 00:24:55.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "$NVMF_PORT", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.024 "hdgst": ${hdgst:-false}, 00:24:55.024 "ddgst": ${ddgst:-false} 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 } 00:24:55.024 EOF 00:24:55.024 )") 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.024 { 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme$subsystem", 00:24:55.024 "trtype": "$TEST_TRANSPORT", 00:24:55.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "$NVMF_PORT", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.024 "hdgst": ${hdgst:-false}, 00:24:55.024 "ddgst": ${ddgst:-false} 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 } 00:24:55.024 EOF 00:24:55.024 )") 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:55.024 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme1", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme2", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 
00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme3", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme4", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme5", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme6", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme7", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme8", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme9", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.024 "trsvcid": "4420", 00:24:55.024 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:55.024 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:55.024 "hdgst": false, 00:24:55.024 "ddgst": false 00:24:55.024 }, 00:24:55.024 "method": "bdev_nvme_attach_controller" 00:24:55.024 },{ 00:24:55.024 "params": { 00:24:55.024 "name": "Nvme10", 00:24:55.024 "trtype": "tcp", 00:24:55.024 "traddr": "10.0.0.2", 00:24:55.024 "adrfam": "ipv4", 00:24:55.025 "trsvcid": "4420", 00:24:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:55.025 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:55.025 "hdgst": false, 00:24:55.025 "ddgst": false 00:24:55.025 }, 00:24:55.025 "method": "bdev_nvme_attach_controller" 00:24:55.025 }' 00:24:55.281 [2024-12-05 13:56:26.548814] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:24:55.281 [2024-12-05 13:56:26.548895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289129 ] 00:24:55.281 [2024-12-05 13:56:26.623832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.281 [2024-12-05 13:56:26.681698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.649 Running I/O for 1 seconds... 00:24:57.836 1545.00 IOPS, 96.56 MiB/s 00:24:57.836 Latency(us) 00:24:57.836 [2024-12-05T12:56:29.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.836 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.836 Verification LBA range: start 0x0 length 0x400 00:24:57.836 Nvme1n1 : 1.11 173.65 10.85 0.00 0.00 364949.30 20486.07 304475.40 00:24:57.836 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.836 Verification LBA range: start 0x0 length 0x400 00:24:57.836 Nvme2n1 : 1.15 227.09 14.19 0.00 0.00 269436.01 21845.33 274959.93 00:24:57.836 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.836 Verification LBA range: start 0x0 length 0x400 00:24:57.836 Nvme3n1 : 1.16 219.98 13.75 0.00 0.00 279045.31 25049.32 301368.51 00:24:57.836 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.836 Verification LBA range: start 0x0 length 0x400 00:24:57.836 Nvme4n1 : 1.16 221.28 13.83 0.00 0.00 272679.63 23204.60 301368.51 00:24:57.836 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.836 Verification LBA range: start 0x0 length 0x400 00:24:57.836 Nvme5n1 : 1.14 168.33 10.52 0.00 0.00 352327.81 46603.38 313796.08 00:24:57.836 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.836 Verification LBA range: start 0x0 
length 0x400 00:24:57.836 Nvme6n1 : 1.10 174.10 10.88 0.00 0.00 333677.35 19223.89 288940.94 00:24:57.836 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.836 Verification LBA range: start 0x0 length 0x400 00:24:57.836 Nvme7n1 : 1.18 217.62 13.60 0.00 0.00 263876.08 26991.12 320009.86 00:24:57.837 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.837 Verification LBA range: start 0x0 length 0x400 00:24:57.837 Nvme8n1 : 1.18 221.20 13.82 0.00 0.00 254927.76 4053.52 313796.08 00:24:57.837 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.837 Verification LBA range: start 0x0 length 0x400 00:24:57.837 Nvme9n1 : 1.17 219.23 13.70 0.00 0.00 253110.04 22622.06 281173.71 00:24:57.837 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.837 Verification LBA range: start 0x0 length 0x400 00:24:57.837 Nvme10n1 : 1.18 216.64 13.54 0.00 0.00 252197.36 15243.19 333990.87 00:24:57.837 [2024-12-05T12:56:29.363Z] =================================================================================================================== 00:24:57.837 [2024-12-05T12:56:29.363Z] Total : 2059.10 128.69 0.00 0.00 284619.20 4053.52 333990.87 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
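The summary line near the top of the run ("1545.00 IOPS, 96.56 MiB/s") is consistent with the per-job table: each Job line reports a 65536-byte IO size, so throughput in MiB/s is IOPS times IO size divided by 2^20. A quick arithmetic cross-check:

```shell
# Cross-check of the reported aggregate throughput: 1545 IOPS at a
# 65536-byte (64 KiB) IO size is 1545 * 65536 / 1048576 MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 1545 * 65536 / 1048576 }'
# prints "96.56 MiB/s"
```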
-- target/shutdown.sh@46 -- # nvmftestfini 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.093 rmmod nvme_tcp 00:24:58.093 rmmod nvme_fabrics 00:24:58.093 rmmod nvme_keyring 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2288526 ']' 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2288526 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2288526 ']' 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2288526 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2288526 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2288526' 00:24:58.093 killing process with pid 2288526 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2288526 00:24:58.093 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2288526 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
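The killprocess helper traced just above guards the kill with several checks before signalling the target pid. A simplified, hedged sketch of that flow (the function name and exact structure here are mine; the real helper lives in autotest_common.sh):

```shell
# Hedged sketch of the killprocess flow traced above: require a pid,
# require the process to exist, refuse to kill sudo itself, then kill
# and reap the process.
killprocess_sketch() {
  local pid=$1
  [ -n "$pid" ] || return 1                 # mirrors the '[' -z ... ']' guard
  kill -0 "$pid" 2>/dev/null || return 1    # mirrors the kill -0 liveness check
  local name
  name=$(ps --no-headers -o comm= "$pid")   # process-name lookup, as in the log
  [ "$name" = sudo ] && return 1            # never kill sudo directly
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap it when it is our child
}
```

The sudo check matters because the test scripts often launch daemons under sudo; killing the sudo wrapper instead of the reactor process would leave the real target running.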
00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.656 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:01.187 00:25:01.187 real 0m11.895s 00:25:01.187 user 0m34.016s 00:25:01.187 sys 0m3.324s 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:01.187 ************************************ 00:25:01.187 END TEST nvmf_shutdown_tc1 00:25:01.187 ************************************ 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:01.187 ************************************ 00:25:01.187 START TEST nvmf_shutdown_tc2 00:25:01.187 ************************************ 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:01.187 13:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.187 13:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:01.187 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:01.187 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:01.187 Found net devices under 0000:09:00.0: cvl_0_0 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.187 13:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.187 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:01.188 Found net devices under 0000:09:00.1: cvl_0_1 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:25:01.188 00:25:01.188 --- 10.0.0.2 ping statistics --- 00:25:01.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.188 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:25:01.188 00:25:01.188 --- 10.0.0.1 ping statistics --- 00:25:01.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.188 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.188 
13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2289895 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2289895 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2289895 ']' 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.188 [2024-12-05 13:56:32.369161] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:25:01.188 [2024-12-05 13:56:32.369241] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.188 [2024-12-05 13:56:32.447554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.188 [2024-12-05 13:56:32.505345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.188 [2024-12-05 13:56:32.505422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.188 [2024-12-05 13:56:32.505438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.188 [2024-12-05 13:56:32.505450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.188 [2024-12-05 13:56:32.505459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:01.188 [2024-12-05 13:56:32.507069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.188 [2024-12-05 13:56:32.507133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.188 [2024-12-05 13:56:32.507200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:01.188 [2024-12-05 13:56:32.507203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.188 [2024-12-05 13:56:32.667180] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.188 13:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.188 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:01.189 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:01.446 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:01.446 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.446 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.446 Malloc1 00:25:01.446 [2024-12-05 13:56:32.770640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.446 Malloc2 00:25:01.446 Malloc3 00:25:01.446 Malloc4 00:25:01.446 Malloc5 00:25:01.704 Malloc6 00:25:01.704 Malloc7 00:25:01.704 Malloc8 00:25:01.704 Malloc9 
00:25:01.704 Malloc10 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2290072 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2290072 /var/tmp/bdevperf.sock 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2290072 ']' 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:25:01.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.704 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.704 { 00:25:01.704 "params": { 00:25:01.704 "name": "Nvme$subsystem", 00:25:01.704 "trtype": "$TEST_TRANSPORT", 00:25:01.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.704 "adrfam": "ipv4", 00:25:01.704 "trsvcid": "$NVMF_PORT", 00:25:01.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.704 "hdgst": ${hdgst:-false}, 00:25:01.704 "ddgst": ${ddgst:-false} 00:25:01.704 }, 00:25:01.704 "method": "bdev_nvme_attach_controller" 00:25:01.704 } 00:25:01.704 EOF 00:25:01.704 )") 00:25:01.963 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": 
${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 
00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.964 { 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme$subsystem", 00:25:01.964 "trtype": "$TEST_TRANSPORT", 00:25:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "$NVMF_PORT", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.964 "hdgst": ${hdgst:-false}, 00:25:01.964 "ddgst": ${ddgst:-false} 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 } 00:25:01.964 EOF 00:25:01.964 )") 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:01.964 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme1", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme2", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme3", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme4", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 
00:25:01.964 "params": { 00:25:01.964 "name": "Nvme5", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme6", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme7", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme8", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:01.964 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme9", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.964 "trsvcid": "4420", 00:25:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:01.964 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:25:01.964 "hdgst": false, 00:25:01.964 "ddgst": false 00:25:01.964 }, 00:25:01.964 "method": "bdev_nvme_attach_controller" 00:25:01.964 },{ 00:25:01.964 "params": { 00:25:01.964 "name": "Nvme10", 00:25:01.964 "trtype": "tcp", 00:25:01.964 "traddr": "10.0.0.2", 00:25:01.964 "adrfam": "ipv4", 00:25:01.965 "trsvcid": "4420", 00:25:01.965 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:01.965 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:01.965 "hdgst": false, 00:25:01.965 "ddgst": false 00:25:01.965 }, 00:25:01.965 "method": "bdev_nvme_attach_controller" 00:25:01.965 }' 00:25:01.965 [2024-12-05 13:56:33.274215] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:01.965 [2024-12-05 13:56:33.274294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290072 ] 00:25:01.965 [2024-12-05 13:56:33.347593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.965 [2024-12-05 13:56:33.405178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.858 Running I/O for 10 seconds... 
00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:03.858 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:04.115 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:04.115 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:04.115 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:04.115 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:04.115 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.115 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2290072 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2290072 ']' 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2290072 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2290072 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2290072' 00:25:04.372 killing process with pid 2290072 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2290072 00:25:04.372 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2290072 00:25:04.372 
Received shutdown signal, test time was about 0.831214 seconds 00:25:04.372 00:25:04.372 Latency(us) 00:25:04.372 [2024-12-05T12:56:35.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.372 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme1n1 : 0.80 238.70 14.92 0.00 0.00 261896.72 19903.53 259425.47 00:25:04.372 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme2n1 : 0.81 237.90 14.87 0.00 0.00 259190.33 17961.72 228356.55 00:25:04.372 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme3n1 : 0.80 239.52 14.97 0.00 0.00 251187.71 23010.42 236123.78 00:25:04.372 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme4n1 : 0.79 242.64 15.16 0.00 0.00 241790.93 18252.99 259425.47 00:25:04.372 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme5n1 : 0.82 234.44 14.65 0.00 0.00 243846.51 36505.98 239230.67 00:25:04.372 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme6n1 : 0.82 235.28 14.70 0.00 0.00 238237.27 22622.06 276513.37 00:25:04.372 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme7n1 : 0.82 238.55 14.91 0.00 0.00 228314.94 3106.89 259425.47 00:25:04.372 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme8n1 : 0.83 232.37 14.52 0.00 0.00 
229662.97 17864.63 260978.92 00:25:04.372 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme9n1 : 0.83 231.21 14.45 0.00 0.00 225346.50 20291.89 265639.25 00:25:04.372 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.372 Verification LBA range: start 0x0 length 0x400 00:25:04.372 Nvme10n1 : 0.78 163.66 10.23 0.00 0.00 305288.53 22233.69 292047.83 00:25:04.372 [2024-12-05T12:56:35.898Z] =================================================================================================================== 00:25:04.372 [2024-12-05T12:56:35.898Z] Total : 2294.28 143.39 0.00 0.00 246478.05 3106.89 292047.83 00:25:04.629 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2289895 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.558 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:05.558 rmmod nvme_tcp 00:25:05.558 rmmod nvme_fabrics 00:25:05.815 rmmod nvme_keyring 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2289895 ']' 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2289895 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2289895 ']' 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2289895 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289895 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289895' 00:25:05.815 killing process with pid 2289895 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2289895 00:25:05.815 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2289895 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.382 13:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.382 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.291 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.292 00:25:08.292 real 0m7.582s 00:25:08.292 user 0m22.968s 00:25:08.292 sys 0m1.424s 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:08.292 ************************************ 00:25:08.292 END TEST nvmf_shutdown_tc2 00:25:08.292 ************************************ 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:08.292 ************************************ 00:25:08.292 START TEST nvmf_shutdown_tc3 00:25:08.292 ************************************ 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.292 
13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.292 13:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:08.292 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:08.292 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.292 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.292 13:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:08.293 Found net devices under 0000:09:00.0: cvl_0_0 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.293 13:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:08.293 Found net devices under 0000:09:00.1: cvl_0_1 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.293 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:25:08.552 00:25:08.552 --- 10.0.0.2 ping statistics --- 00:25:08.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.552 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:25:08.552 00:25:08.552 --- 10.0.0.1 ping statistics --- 00:25:08.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.552 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.552 
13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2290951 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2290951 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2290951 ']' 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.552 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.552 [2024-12-05 13:56:40.018672] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:25:08.552 [2024-12-05 13:56:40.018791] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.811 [2024-12-05 13:56:40.098476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.811 [2024-12-05 13:56:40.162595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.811 [2024-12-05 13:56:40.162646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.811 [2024-12-05 13:56:40.162675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.811 [2024-12-05 13:56:40.162687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.811 [2024-12-05 13:56:40.162697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.811 [2024-12-05 13:56:40.164249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.811 [2024-12-05 13:56:40.164312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.811 [2024-12-05 13:56:40.164309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:08.811 [2024-12-05 13:56:40.164284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.811 [2024-12-05 13:56:40.315565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.811 13:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.811 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:09.069 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.070 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.070 Malloc1 00:25:09.070 [2024-12-05 13:56:40.414104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.070 Malloc2 00:25:09.070 Malloc3 00:25:09.070 Malloc4 00:25:09.070 Malloc5 00:25:09.328 Malloc6 00:25:09.328 Malloc7 00:25:09.328 Malloc8 00:25:09.328 Malloc9 
00:25:09.328 Malloc10 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2291040 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2291040 /var/tmp/bdevperf.sock 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2291040 ']' 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:09.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.588 { 00:25:09.588 "params": { 00:25:09.588 "name": "Nvme$subsystem", 00:25:09.588 "trtype": "$TEST_TRANSPORT", 00:25:09.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.588 "adrfam": "ipv4", 00:25:09.588 "trsvcid": "$NVMF_PORT", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.588 "hdgst": ${hdgst:-false}, 00:25:09.588 "ddgst": ${ddgst:-false} 00:25:09.588 }, 00:25:09.588 "method": "bdev_nvme_attach_controller" 00:25:09.588 } 00:25:09.588 EOF 00:25:09.588 )") 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.588 { 00:25:09.588 "params": { 00:25:09.588 "name": "Nvme$subsystem", 00:25:09.588 "trtype": "$TEST_TRANSPORT", 00:25:09.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.588 
"adrfam": "ipv4", 00:25:09.588 "trsvcid": "$NVMF_PORT", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.588 "hdgst": ${hdgst:-false}, 00:25:09.588 "ddgst": ${ddgst:-false} 00:25:09.588 }, 00:25:09.588 "method": "bdev_nvme_attach_controller" 00:25:09.588 } 00:25:09.588 EOF 00:25:09.588 )") 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.588 { 00:25:09.588 "params": { 00:25:09.588 "name": "Nvme$subsystem", 00:25:09.588 "trtype": "$TEST_TRANSPORT", 00:25:09.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.588 "adrfam": "ipv4", 00:25:09.588 "trsvcid": "$NVMF_PORT", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.588 "hdgst": ${hdgst:-false}, 00:25:09.588 "ddgst": ${ddgst:-false} 00:25:09.588 }, 00:25:09.588 "method": "bdev_nvme_attach_controller" 00:25:09.588 } 00:25:09.588 EOF 00:25:09.588 )") 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.588 { 00:25:09.588 "params": { 00:25:09.588 "name": "Nvme$subsystem", 00:25:09.588 "trtype": "$TEST_TRANSPORT", 00:25:09.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.588 "adrfam": "ipv4", 00:25:09.588 "trsvcid": "$NVMF_PORT", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.588 "hdgst": ${hdgst:-false}, 00:25:09.588 "ddgst": ${ddgst:-false} 00:25:09.588 }, 00:25:09.588 "method": "bdev_nvme_attach_controller" 00:25:09.588 } 00:25:09.588 EOF 00:25:09.588 )") 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.588 { 00:25:09.588 "params": { 00:25:09.588 "name": "Nvme$subsystem", 00:25:09.588 "trtype": "$TEST_TRANSPORT", 00:25:09.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.588 "adrfam": "ipv4", 00:25:09.588 "trsvcid": "$NVMF_PORT", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.588 "hdgst": ${hdgst:-false}, 00:25:09.588 "ddgst": ${ddgst:-false} 00:25:09.588 }, 00:25:09.588 "method": "bdev_nvme_attach_controller" 00:25:09.588 } 00:25:09.588 EOF 00:25:09.588 )") 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.588 { 00:25:09.588 "params": { 00:25:09.588 "name": "Nvme$subsystem", 00:25:09.588 "trtype": "$TEST_TRANSPORT", 00:25:09.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.588 "adrfam": "ipv4", 00:25:09.588 "trsvcid": "$NVMF_PORT", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.588 "hdgst": ${hdgst:-false}, 00:25:09.588 "ddgst": 
${ddgst:-false} 00:25:09.588 }, 00:25:09.588 "method": "bdev_nvme_attach_controller" 00:25:09.588 } 00:25:09.588 EOF 00:25:09.588 )") 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.588 { 00:25:09.588 "params": { 00:25:09.588 "name": "Nvme$subsystem", 00:25:09.588 "trtype": "$TEST_TRANSPORT", 00:25:09.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.588 "adrfam": "ipv4", 00:25:09.588 "trsvcid": "$NVMF_PORT", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.588 "hdgst": ${hdgst:-false}, 00:25:09.588 "ddgst": ${ddgst:-false} 00:25:09.588 }, 00:25:09.588 "method": "bdev_nvme_attach_controller" 00:25:09.588 } 00:25:09.588 EOF 00:25:09.588 )") 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.588 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.589 { 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme$subsystem", 00:25:09.589 "trtype": "$TEST_TRANSPORT", 00:25:09.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "$NVMF_PORT", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.589 "hdgst": ${hdgst:-false}, 00:25:09.589 "ddgst": ${ddgst:-false} 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 } 00:25:09.589 EOF 00:25:09.589 
)") 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.589 { 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme$subsystem", 00:25:09.589 "trtype": "$TEST_TRANSPORT", 00:25:09.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "$NVMF_PORT", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.589 "hdgst": ${hdgst:-false}, 00:25:09.589 "ddgst": ${ddgst:-false} 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 } 00:25:09.589 EOF 00:25:09.589 )") 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.589 { 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme$subsystem", 00:25:09.589 "trtype": "$TEST_TRANSPORT", 00:25:09.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "$NVMF_PORT", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.589 "hdgst": ${hdgst:-false}, 00:25:09.589 "ddgst": ${ddgst:-false} 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 } 00:25:09.589 EOF 00:25:09.589 )") 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.589 
13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:09.589 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme1", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme2", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme3", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme4", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 
00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme5", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme6", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme7", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme8", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme9", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 },{ 00:25:09.589 "params": { 00:25:09.589 "name": "Nvme10", 00:25:09.589 "trtype": "tcp", 00:25:09.589 "traddr": "10.0.0.2", 00:25:09.589 "adrfam": "ipv4", 00:25:09.589 "trsvcid": "4420", 00:25:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:09.589 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:09.589 "hdgst": false, 00:25:09.589 "ddgst": false 00:25:09.589 }, 00:25:09.589 "method": "bdev_nvme_attach_controller" 00:25:09.589 }' 00:25:09.589 [2024-12-05 13:56:40.948074] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:09.589 [2024-12-05 13:56:40.948150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291040 ] 00:25:09.589 [2024-12-05 13:56:41.019025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.589 [2024-12-05 13:56:41.077560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.489 Running I/O for 10 seconds... 
00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:11.746 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:11.747 13:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:11.747 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:12.005 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:12.262 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:12.262 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:12.262 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:12.262 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:12.262 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.262 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:12.262 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:12.534 13:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2290951 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2290951 ']' 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2290951 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2290951 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2290951' 00:25:12.534 killing process with pid 2290951 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2290951 00:25:12.534 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2290951 00:25:12.534 [2024-12-05 13:56:43.837334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837488] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534 [2024-12-05 13:56:43.837820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534 [2024-12-05 13:56:43.837754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534 [2024-12-05 
13:56:43.837835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534
[2024-12-05 13:56:43.837850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534
[2024-12-05 13:56:43.837851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534
[2024-12-05 13:56:43.837883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534
[2024-12-05 13:56:43.837896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534
[2024-12-05 13:56:43.837921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534
[2024-12-05 13:56:43.837934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534
[2024-12-05 13:56:43.837946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534
[2024-12-05 13:56:43.837975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534
[2024-12-05 13:56:43.837988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.837996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534
[2024-12-05 13:56:43.838000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.838013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.838012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534
[2024-12-05 13:56:43.838027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.838029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534
[2024-12-05 13:56:43.838039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.838045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.534
[2024-12-05 13:56:43.838051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.838064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.534
[2024-12-05 13:56:43.838065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.838079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.534
[2024-12-05 13:56:43.838081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271bee0 is same with the state(6) to be set 00:25:12.535
[2024-12-05 13:56:43.838676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.535
[2024-12-05 13:56:43.838705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.535
[2024-12-05 13:56:43.838719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536
[2024-12-05 13:56:43.838734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536
[2024-12-05 13:56:43.838748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536
[2024-12-05 13:56:43.838763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536
[2024-12-05 13:56:43.838777]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.838805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.838834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.838862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.838890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.838919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.838948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.838977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.838999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 
[2024-12-05 13:56:43.839273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.839488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.536 [2024-12-05 13:56:43.839502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.536 [2024-12-05 13:56:43.840321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 
13:56:43.840453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840629] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.536 [2024-12-05 13:56:43.840734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840856] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.840990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841019] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841181] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.841254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3190 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:12.537 [2024-12-05 13:56:43.843449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621700 (9): Bad file descriptor
00:25:12.537 [2024-12-05 13:56:43.843558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set
00:25:12.537 [2024-12-05 13:56:43.843647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with 
the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 
00:25:12.537 [2024-12-05 13:56:43.843959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.843996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.844008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.844020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.537 [2024-12-05 13:56:43.844032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 
13:56:43.844104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.844238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3660 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.845775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.538 [2024-12-05 13:56:43.845810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x621700 with addr=10.0.0.2, port=4420 00:25:12.538 [2024-12-05 13:56:43.845828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621700 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.845890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.845914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.845933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.845947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.845964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.845980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.845994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9a0d0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 
13:56:43.846120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618930 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6156f0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846578] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.538 [2024-12-05 13:56:43.846590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.538 [2024-12-05 13:56:43.846602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620850 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846830] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.538 [2024-12-05 13:56:43.846867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.538 [2024-12-05 13:56:43.846878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.846891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with [2024-12-05 13:56:43.846891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:25:12.539 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.846905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.846914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1[2024-12-05 13:56:43.846917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.846932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with [2024-12-05 13:56:43.846932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:25:12.539 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 
[2024-12-05 13:56:43.846946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.846950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.846958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.846965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.846971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.846981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:1[2024-12-05 13:56:43.846983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.846997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-05 13:56:43.846998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same 
with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:1[2024-12-05 13:56:43.847106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with 28 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-05 13:56:43.847122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.539 [2024-12-05 13:56:43.847282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847467] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.539 [2024-12-05 13:56:43.847478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.539 [2024-12-05 13:56:43.847491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.539 [2024-12-05 13:56:43.847505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.540 [2024-12-05 13:56:43.847507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.540 [2024-12-05 13:56:43.847521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.540 [2024-12-05 13:56:43.847537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.540 [2024-12-05 13:56:43.847551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.540 [2024-12-05 13:56:43.847554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.540 [2024-12-05 13:56:43.847566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3eb0 is same with the state(6) to be set 00:25:12.540 [2024-12-05 13:56:43.847582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847699] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.847984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.847998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.540 [2024-12-05 13:56:43.848042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.540 [2024-12-05 13:56:43.848586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.540 [2024-12-05 13:56:43.848600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 13:56:43.848628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 13:56:43.848656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 13:56:43.848684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 
13:56:43.848713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 13:56:43.848747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 13:56:43.848777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 13:56:43.848806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.541 [2024-12-05 13:56:43.848826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.541 [2024-12-05 13:56:43.848856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.848999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with 
the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 
00:25:12.541 [2024-12-05 13:56:43.849257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849271] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.541 [2024-12-05 13:56:43.849280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621700 (9): Bad file descriptor 00:25:12.541 [2024-12-05 13:56:43.849344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set 00:25:12.541 [2024-12-05 13:56:43.849395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24d4380 is same with the state(6) to be set
00:25:12.541 (message repeated for tqpair=0x24d4380, timestamps 2024-12-05 13:56:43.849407 through 13:56:43.849700)
00:25:12.541 [2024-12-05 13:56:43.849438] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:12.541 [2024-12-05 13:56:43.850928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744550 is same with the state(6) to be set
00:25:12.542 (message repeated for tqpair=0x2744550, timestamps 13:56:43.850928 through 13:56:43.851711)
00:25:12.541 [2024-12-05 13:56:43.850943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:25:12.541 [2024-12-05 13:56:43.850985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x620850 (9): Bad file descriptor
00:25:12.542 [2024-12-05 13:56:43.851011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:12.542 [2024-12-05 13:56:43.851024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:12.542 [2024-12-05 13:56:43.851041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:12.542 [2024-12-05 13:56:43.851057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:25:12.542 [2024-12-05 13:56:43.852433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.542 [2024-12-05 13:56:43.852473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x620850 with addr=10.0.0.2, port=4420
00:25:12.542 [2024-12-05 13:56:43.852490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620850 is same with the state(6) to be set
00:25:12.542 [2024-12-05 13:56:43.852564] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:12.542 [2024-12-05 13:56:43.852645] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:12.542 [2024-12-05 13:56:43.852848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744a20 is same with the state(6) to be set
00:25:12.543 (message repeated for tqpair=0x2744a20, timestamps 13:56:43.852848 through 13:56:43.853693)
00:25:12.542 [2024-12-05 13:56:43.852914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x620850 (9): Bad file descriptor
00:25:12.542 [2024-12-05 13:56:43.853023] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:12.543 [2024-12-05 13:56:43.853216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:25:12.543 [2024-12-05 13:56:43.853238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:25:12.543 [2024-12-05 13:56:43.853252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:25:12.543 [2024-12-05 13:56:43.853268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:25:12.543 [2024-12-05 13:56:43.853315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.543 [2024-12-05 13:56:43.853335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.544 (READ command / ABORTED - SQ DELETION completion pairs repeated for cid:14 through cid:47, lba:18176 through lba:22400, timestamps 13:56:43.853359 through 13:56:43.854389)
00:25:12.544 [2024-12-05 13:56:43.854404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.544 [2024-12-05 13:56:43.854424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.544 [2024-12-05 13:56:43.854441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544 [2024-12-05 13:56:43.854439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544 [2024-12-05 13:56:43.854456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.544
[2024-12-05 13:56:43.854472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544
[2024-12-05 13:56:43.854489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.544
[2024-12-05 13:56:43.854489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544
[2024-12-05 13:56:43.854520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.544
[2024-12-05 13:56:43.854538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544
[2024-12-05 13:56:43.854553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.544
[2024-12-05 13:56:43.854568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544
[2024-12-05 13:56:43.854581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.544
[2024-12-05 13:56:43.854600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544
[2024-12-05 13:56:43.854614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.544
[2024-12-05 13:56:43.854628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544
[2024-12-05 13:56:43.854655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.544
[2024-12-05 13:56:43.854670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.544
[2024-12-05 13:56:43.854674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.544
[2024-12-05 13:56:43.854682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.854966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.854994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.854996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.855007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.855012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.545
[2024-12-05 13:56:43.855019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.855026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545
[2024-12-05 13:56:43.855032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.855040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa250f0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.855044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545
[2024-12-05 13:56:43.855057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855207] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2744ef0 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.855988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.545 [2024-12-05 13:56:43.856013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545 [2024-12-05 13:56:43.856029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.545 [2024-12-05 13:56:43.856042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545 [2024-12-05 
13:56:43.856055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.545 [2024-12-05 13:56:43.856069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545 [2024-12-05 13:56:43.856082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.545 [2024-12-05 13:56:43.856095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.545 [2024-12-05 13:56:43.856107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4ce60 is same with the state(6) to be set 00:25:12.545 [2024-12-05 13:56:43.856135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9a0d0 (9): Bad file descriptor 00:25:12.545 [2024-12-05 13:56:43.856187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e760 is same with the state(6) to be set 00:25:12.546 [2024-12-05 13:56:43.856346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856491] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96ff0 is same with the state(6) to be set 00:25:12.546 [2024-12-05 13:56:43.856520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618930 (9): Bad file descriptor 00:25:12.546 [2024-12-05 13:56:43.856577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa44f40 is same with the state(6) to be set 00:25:12.546 [2024-12-05 13:56:43.856732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.546 [2024-12-05 13:56:43.856830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.856842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x589110 is same with the state(6) to be set 00:25:12.546 [2024-12-05 13:56:43.856873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6156f0 (9): Bad file descriptor 00:25:12.546 [2024-12-05 13:56:43.858054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.546 [2024-12-05 13:56:43.858077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.858098] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.546 [2024-12-05 13:56:43.858113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.858129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.546 [2024-12-05 13:56:43.858143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.858158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.546 [2024-12-05 13:56:43.858172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.858187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.546 [2024-12-05 13:56:43.858200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.858215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.546 [2024-12-05 13:56:43.858229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.546 [2024-12-05 13:56:43.858243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa263b0 is same with the state(6) to be set 00:25:12.546 [2024-12-05 13:56:43.858487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:12.546 
[2024-12-05 13:56:43.858526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x589110 (9): Bad file descriptor
00:25:12.546 [2024-12-05 13:56:43.859562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:12.546 [2024-12-05 13:56:43.859590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:12.546 [2024-12-05 13:56:43.859614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4ce60 (9): Bad file descriptor
00:25:12.546 [2024-12-05 13:56:43.860169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.546 [2024-12-05 13:56:43.860198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x589110 with addr=10.0.0.2, port=4420
00:25:12.546 [2024-12-05 13:56:43.860215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x589110 is same with the state(6) to be set
00:25:12.546 [2024-12-05 13:56:43.860291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.546 [2024-12-05 13:56:43.860316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621700 with addr=10.0.0.2, port=4420
00:25:12.546 [2024-12-05 13:56:43.860331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621700 is same with the state(6) to be set
00:25:12.546 [2024-12-05 13:56:43.860741] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:12.546 [2024-12-05 13:56:43.860914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.546 [2024-12-05 13:56:43.860940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa4ce60 with addr=10.0.0.2, port=4420
00:25:12.546 [2024-12-05 13:56:43.860963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4ce60 is same with the state(6) to be set
00:25:12.546 [2024-12-05 13:56:43.860982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x589110 (9): Bad file descriptor
00:25:12.546 [2024-12-05 13:56:43.861001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621700 (9): Bad file descriptor
00:25:12.546 [2024-12-05 13:56:43.861117] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:12.546 [2024-12-05 13:56:43.861156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4ce60 (9): Bad file descriptor
00:25:12.546 [2024-12-05 13:56:43.861177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:25:12.546 [2024-12-05 13:56:43.861191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:25:12.546 [2024-12-05 13:56:43.861206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:25:12.546 [2024-12-05 13:56:43.861220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:25:12.546 [2024-12-05 13:56:43.861235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:12.546 [2024-12-05 13:56:43.861248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:12.546 [2024-12-05 13:56:43.861260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:12.546 [2024-12-05 13:56:43.861273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:25:12.546 [2024-12-05 13:56:43.861342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:25:12.546 [2024-12-05 13:56:43.861362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:25:12.546 [2024-12-05 13:56:43.861376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:25:12.546 [2024-12-05 13:56:43.861388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:25:12.546 [2024-12-05 13:56:43.861692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:25:12.546 [2024-12-05 13:56:43.861848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.547 [2024-12-05 13:56:43.861875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x620850 with addr=10.0.0.2, port=4420
00:25:12.547 [2024-12-05 13:56:43.861892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620850 is same with the state(6) to be set
00:25:12.547 [2024-12-05 13:56:43.861946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x620850 (9): Bad file descriptor
00:25:12.547 [2024-12-05 13:56:43.862000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:25:12.547 [2024-12-05 13:56:43.862016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:25:12.547 [2024-12-05 13:56:43.862029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:25:12.547 [2024-12-05 13:56:43.862042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:25:12.547 [2024-12-05 13:56:43.865982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e760 (9): Bad file descriptor
00:25:12.547 [2024-12-05 13:56:43.866034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa96ff0 (9): Bad file descriptor
00:25:12.547 [2024-12-05 13:56:43.866080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa44f40 (9): Bad file descriptor
00:25:12.547 [2024-12-05 13:56:43.866229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.547 [2024-12-05 13:56:43.866254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pairs repeat for READ sqid:1 cid:5-63 (lba 17024-24448) and WRITE sqid:1 cid:0-3 (lba 24576-24960), each len:128 and each completed ABORTED - SQ DELETION (00/08), timestamps 13:56:43.866285-868208 ...]
00:25:12.548 [2024-12-05 13:56:43.868222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826460 is same with the state(6) to be set
00:25:12.548 [2024-12-05 13:56:43.869548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.548 [2024-12-05 13:56:43.869571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further pairs repeat for READ sqid:1 cid:5-29 (lba 25216-28288) and WRITE sqid:1 cid:0-3 (lba 32768-33152), each len:128 and each completed ABORTED - SQ DELETION (00/08), timestamps 13:56:43.869591-870454 ...]
00:25:12.549 [2024-12-05 13:56:43.870471] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 
[2024-12-05 13:56:43.870817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.549 [2024-12-05 13:56:43.870905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.549 [2024-12-05 13:56:43.870921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.870935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.870950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.870964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.870979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.870992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.871485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.871499] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x827580 is same with the state(6) to be set 00:25:12.550 [2024-12-05 13:56:43.872796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.872819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.872840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.872857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.872873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.872887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.872903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.872917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.872933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.872947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.872962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.872976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.872992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873306] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.550 [2024-12-05 13:56:43.873409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.550 [2024-12-05 13:56:43.873432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 
13:56:43.873823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.873983] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.873997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 
[2024-12-05 13:56:43.874318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.551 [2024-12-05 13:56:43.874556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.551 [2024-12-05 13:56:43.874575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.874589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.874604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.874618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.874633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.874647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.874662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.874676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.874692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.874707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.874721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82a8a0 is same with the state(6) to be set 00:25:12.552 [2024-12-05 13:56:43.875930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:12.552 [2024-12-05 13:56:43.875963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:12.552 [2024-12-05 13:56:43.875986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:25:12.552 [2024-12-05 13:56:43.876409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.552 [2024-12-05 13:56:43.876452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6156f0 with addr=10.0.0.2, port=4420 00:25:12.552 [2024-12-05 13:56:43.876471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6156f0 is same with the state(6) to be set 00:25:12.552 [2024-12-05 13:56:43.876569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.552 [2024-12-05 13:56:43.876594] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618930 with addr=10.0.0.2, port=4420 00:25:12.552 [2024-12-05 13:56:43.876610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618930 is same with the state(6) to be set 00:25:12.552 [2024-12-05 13:56:43.876691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.552 [2024-12-05 13:56:43.876715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9a0d0 with addr=10.0.0.2, port=4420 00:25:12.552 [2024-12-05 13:56:43.876730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9a0d0 is same with the state(6) to be set 00:25:12.552 [2024-12-05 13:56:43.877619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:12.552 [2024-12-05 13:56:43.877647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:12.552 [2024-12-05 13:56:43.877664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:25:12.552 [2024-12-05 13:56:43.877682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:12.552 [2024-12-05 13:56:43.877751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6156f0 (9): Bad file descriptor 00:25:12.552 [2024-12-05 13:56:43.877776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618930 (9): Bad file descriptor 00:25:12.552 [2024-12-05 13:56:43.877795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9a0d0 (9): Bad file descriptor 00:25:12.552 [2024-12-05 13:56:43.877879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 
[2024-12-05 13:56:43.877900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.877922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.877938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.877954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.877969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.877985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.877999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.552 [2024-12-05 13:56:43.878587] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-12-05 13:56:43.878601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.878976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.878994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 
13:56:43.879100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.553 [2024-12-05 13:56:43.879611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-12-05 13:56:43.879749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.553 [2024-12-05 13:56:43.879765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-12-05 13:56:43.879780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.554 [2024-12-05 13:56:43.879795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-12-05 13:56:43.879809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.554 [2024-12-05 13:56:43.879823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23e30 is same with the state(6) to be set 00:25:12.554 [2024-12-05 13:56:43.881072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-12-05 13:56:43.881095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.554 [2024-12-05 13:56:43.881116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-12-05 13:56:43.881131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.554 [2024-12-05 13:56:43.881148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-12-05 13:56:43.881162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.554 [2024-12-05 13:56:43.881177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-12-05 13:56:43.881191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.554 [2024-12-05 13:56:43.881207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.554 [2024-12-05 13:56:43.881221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/"ABORTED - SQ DELETION" record pairs repeated at 13:56:43.881236-43.882988 for READ cid 9-63 (lba 17536-24448 in 128-block strides) and WRITE cid 0-3 (lba 24576-24960), all on sqid:1 ...]
00:25:12.555 [2024-12-05 13:56:43.883002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27670 is same with the state(6) to be set
[... second burst at 13:56:43.884229-43.885980: the same READ commands (cid 3-60, lba 16768-24064) reported ABORTED - SQ DELETION again, record by record ...]
00:25:12.557 [2024-12-05 13:56:43.885966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.557 [2024-12-05 13:56:43.885980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:12.557 [2024-12-05 13:56:43.885999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.557 [2024-12-05 13:56:43.886014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.557 [2024-12-05 13:56:43.886029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.557 [2024-12-05 13:56:43.886043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.557 [2024-12-05 13:56:43.886060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.557 [2024-12-05 13:56:43.886074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.557 [2024-12-05 13:56:43.886089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.557 [2024-12-05 13:56:43.886103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.557 [2024-12-05 13:56:43.886118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.557 [2024-12-05 13:56:43.886132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.557 [2024-12-05 13:56:43.886148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.557 [2024-12-05 
13:56:43.886162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.557 [2024-12-05 13:56:43.886176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28930 is same with the state(6) to be set
00:25:12.557 [2024-12-05 13:56:43.887842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:25:12.557 [2024-12-05 13:56:43.887878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:25:12.557 task offset: 25344 on job bdev=Nvme1n1 fails
00:25:12.557
00:25:12.557 Latency(us)
00:25:12.557 [2024-12-05T12:56:44.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.557 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme1n1 ended in about 0.89 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme1n1 : 0.89 216.44 13.53 72.15 0.00 219210.52 5655.51 250104.79
00:25:12.557 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme2n1 ended in about 0.91 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme2n1 : 0.91 144.39 9.02 70.01 0.00 289215.62 19903.53 260978.92
00:25:12.557 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme3n1 ended in about 0.92 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme3n1 : 0.92 213.64 13.35 69.76 0.00 214229.72 20388.98 256318.58
00:25:12.557 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme4n1 ended in about 0.90 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme4n1 : 0.90 214.41 13.40 71.47 0.00 207522.47 4563.25 264085.81
00:25:12.557 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme5n1 ended in about 0.93 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme5n1 : 0.93 138.27 8.64 69.13 0.00 280818.22 36117.62 243891.01
00:25:12.557 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme6n1 ended in about 0.90 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme6n1 : 0.90 156.18 9.76 60.92 0.00 261469.69 5048.70 254765.13
00:25:12.557 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme7n1 ended in about 0.90 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme7n1 : 0.90 210.12 13.13 6.64 0.00 255284.58 2827.76 264085.81
00:25:12.557 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme8n1 ended in about 0.93 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme8n1 : 0.93 142.10 8.88 68.90 0.00 258384.72 18738.44 257872.02
00:25:12.557 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme9n1 ended in about 0.93 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme9n1 : 0.93 140.55 8.78 68.66 0.00 255158.22 20291.89 265639.25
00:25:12.557 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.557 Job: Nvme10n1 ended in about 0.92 seconds with error
00:25:12.557 Verification LBA range: start 0x0 length 0x400
00:25:12.557 Nvme10n1 : 0.92 144.47 9.03 69.52 0.00 243211.62 20000.62 287387.50
00:25:12.557 [2024-12-05T12:56:44.083Z] ===================================================================================================================
00:25:12.557 [2024-12-05T12:56:44.083Z] Total : 1720.55 107.53 627.15 0.00 245393.70 2827.76 287387.50
00:25:12.557 [2024-12-05 13:56:43.917376] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:12.557 [2024-12-05 13:56:43.917464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:25:12.557 [2024-12-05 13:56:43.917737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.557 [2024-12-05 13:56:43.917772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621700 with addr=10.0.0.2, port=4420 00:25:12.557 [2024-12-05 13:56:43.917792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621700 is same with the state(6) to be set 00:25:12.557 [2024-12-05 13:56:43.917923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.557 [2024-12-05 13:56:43.917949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x589110 with addr=10.0.0.2, port=4420 00:25:12.557 [2024-12-05 13:56:43.917966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x589110 is same with the state(6) to be set 00:25:12.557 [2024-12-05 13:56:43.918059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.557 [2024-12-05 13:56:43.918087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa4ce60 with addr=10.0.0.2, port=4420 00:25:12.557 [2024-12-05 13:56:43.918103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4ce60 is same with the state(6) to be set 00:25:12.557 [2024-12-05 13:56:43.918185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.557 [2024-12-05 13:56:43.918211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x620850 with addr=10.0.0.2, port=4420 00:25:12.557 [2024-12-05 13:56:43.918227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x620850 is same with the state(6) to be set 00:25:12.557 [2024-12-05 13:56:43.918243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:12.557 [2024-12-05 13:56:43.918256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:12.557 [2024-12-05 13:56:43.918272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:12.557 [2024-12-05 13:56:43.918291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:12.557 [2024-12-05 13:56:43.918320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:12.557 [2024-12-05 13:56:43.918334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:12.557 [2024-12-05 13:56:43.918347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:12.557 [2024-12-05 13:56:43.918360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:25:12.557 [2024-12-05 13:56:43.918374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.918387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.918400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.918413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:25:12.558 [2024-12-05 13:56:43.918507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x620850 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.918541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4ce60 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.918564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x589110 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.918586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621700 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.918857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.558 [2024-12-05 13:56:43.918887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa44f40 with addr=10.0.0.2, port=4420 00:25:12.558 [2024-12-05 13:56:43.918904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa44f40 is same with the state(6) to be set 00:25:12.558 [2024-12-05 13:56:43.918986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.558 [2024-12-05 13:56:43.919012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8e760 with addr=10.0.0.2, port=4420 00:25:12.558 [2024-12-05 13:56:43.919028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e760 is same with the state(6) to be set 00:25:12.558 [2024-12-05 13:56:43.919120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.558 [2024-12-05 13:56:43.919146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa96ff0 with addr=10.0.0.2, port=4420 00:25:12.558 [2024-12-05 13:56:43.919162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96ff0 is same with the state(6) to be set 00:25:12.558 
[2024-12-05 13:56:43.919200] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:25:12.558 [2024-12-05 13:56:43.919223] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:25:12.558 [2024-12-05 13:56:43.919242] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:25:12.558 [2024-12-05 13:56:43.919261] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:25:12.558 [2024-12-05 13:56:43.919278] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:25:12.558 [2024-12-05 13:56:43.919298] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:25:12.558 [2024-12-05 13:56:43.919316] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:25:12.558 [2024-12-05 13:56:43.920195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:25:12.558 [2024-12-05 13:56:43.920223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:12.558 [2024-12-05 13:56:43.920240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:12.558 [2024-12-05 13:56:43.920301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa44f40 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.920327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8e760 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.920346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa96ff0 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.920362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.920374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.920388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.920401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:12.558 [2024-12-05 13:56:43.920441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.920458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.920471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:25:12.558 [2024-12-05 13:56:43.920483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:25:12.558 [2024-12-05 13:56:43.920497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.920509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.920521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.920533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:25:12.558 [2024-12-05 13:56:43.920546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.920557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.920569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.920581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:25:12.558 [2024-12-05 13:56:43.920745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.558 [2024-12-05 13:56:43.920772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9a0d0 with addr=10.0.0.2, port=4420 00:25:12.558 [2024-12-05 13:56:43.920788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9a0d0 is same with the state(6) to be set 00:25:12.558 [2024-12-05 13:56:43.920863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.558 [2024-12-05 13:56:43.920887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618930 with addr=10.0.0.2, port=4420 00:25:12.558 [2024-12-05 13:56:43.920903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618930 is same with the state(6) to be set 00:25:12.558 [2024-12-05 13:56:43.920976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.558 [2024-12-05 13:56:43.921006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6156f0 with addr=10.0.0.2, port=4420 00:25:12.558 [2024-12-05 13:56:43.921022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6156f0 is same with the state(6) to be set 00:25:12.558 [2024-12-05 13:56:43.921037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.921049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.921062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.921075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:25:12.558 [2024-12-05 13:56:43.921089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.921101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.921113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.921125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:25:12.558 [2024-12-05 13:56:43.921137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.921149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.921161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.921173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:25:12.558 [2024-12-05 13:56:43.921243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9a0d0 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.921268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618930 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.921286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6156f0 (9): Bad file descriptor 00:25:12.558 [2024-12-05 13:56:43.921330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.921348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.921362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.921374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:25:12.558 [2024-12-05 13:56:43.921388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.921400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.921412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.921435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:25:12.558 [2024-12-05 13:56:43.921450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:12.558 [2024-12-05 13:56:43.921462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:12.558 [2024-12-05 13:56:43.921474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:12.558 [2024-12-05 13:56:43.921486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:13.124 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2291040 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2291040 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2291040 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:25:14.117 13:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:25:14.117 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:14.118 rmmod nvme_tcp 00:25:14.118 rmmod nvme_fabrics 00:25:14.118 rmmod nvme_keyring 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2290951 ']' 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2290951 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2290951 ']' 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2290951 00:25:14.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2290951) - No such process 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2290951 is not found' 00:25:14.118 Process with pid 2290951 is not found 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:25:14.118 
13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.118 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.020 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.020 00:25:16.020 real 0m7.755s 00:25:16.020 user 0m19.728s 00:25:16.020 sys 0m1.528s 00:25:16.020 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.020 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.020 ************************************ 00:25:16.020 END TEST nvmf_shutdown_tc3 00:25:16.020 ************************************ 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:25:16.280 13:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:16.280 ************************************ 00:25:16.280 START TEST nvmf_shutdown_tc4 00:25:16.280 ************************************ 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.280 13:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.280 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:16.281 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.281 
13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:16.281 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:16.281 Found net devices under 0000:09:00.0: cvl_0_0 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:16.281 Found net devices under 0000:09:00.1: cvl_0_1 
00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.281 13:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.281 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:25:16.282 00:25:16.282 --- 10.0.0.2 ping statistics --- 00:25:16.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.282 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:25:16.282 00:25:16.282 --- 10.0.0.1 ping statistics --- 00:25:16.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.282 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2291953 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2291953 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2291953 ']' 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:16.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.282 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.540 [2024-12-05 13:56:47.823195] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:16.540 [2024-12-05 13:56:47.823283] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.540 [2024-12-05 13:56:47.898423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.540 [2024-12-05 13:56:47.954373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.540 [2024-12-05 13:56:47.954435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.540 [2024-12-05 13:56:47.954467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.540 [2024-12-05 13:56:47.954479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.540 [2024-12-05 13:56:47.954489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:16.540 [2024-12-05 13:56:47.959441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.540 [2024-12-05 13:56:47.959569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.540 [2024-12-05 13:56:47.959635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:16.540 [2024-12-05 13:56:47.959639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.799 [2024-12-05 13:56:48.100054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.799 13:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.799 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.799 Malloc1 00:25:16.799 [2024-12-05 13:56:48.191312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.799 Malloc2 00:25:16.799 Malloc3 00:25:16.799 Malloc4 00:25:17.057 Malloc5 00:25:17.057 Malloc6 00:25:17.057 Malloc7 00:25:17.057 Malloc8 00:25:17.057 Malloc9 
00:25:17.315 Malloc10 00:25:17.315 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.315 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:17.315 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.315 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:17.315 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2292133 00:25:17.315 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:17.315 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:17.315 [2024-12-05 13:56:48.697033] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2291953 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2291953 ']' 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2291953 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2291953 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2291953' 00:25:22.655 killing process with pid 2291953 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2291953 00:25:22.655 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2291953 00:25:22.655 [2024-12-05 13:56:53.689950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04d10 is same with the state(6) to be set 00:25:22.655 [2024-12-05 
13:56:53.690047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04d10 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.690576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792b60 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.690621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792b60 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.690645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792b60 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.690659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792b60 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.690672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792b60 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed 
with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.691351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04840 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.691385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04840 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.691602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:22.656 NVMe io qpair process completion error 00:25:22.656 [2024-12-05 13:56:53.692191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe40 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe40 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe40 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe40 is same with 
the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe40 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe40 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.692989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.693001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950310 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.693535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19507e0 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.693570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19507e0 is same with the state(6) to be set 
00:25:22.656 [2024-12-05 13:56:53.693587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19507e0 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.693599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19507e0 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.693612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19507e0 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.693623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19507e0 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.698409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.698458] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.698474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.698487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 starting I/O failed: -6 00:25:22.656 [2024-12-05 13:56:53.698500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.698512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.698524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.698537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ecc50 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O
failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 [2024-12-05 13:56:53.698940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.656 [2024-12-05 13:56:53.699068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec290 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.699099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec290 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.699114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec290 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 [2024-12-05 13:56:53.699131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec290 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 [2024-12-05 13:56:53.699144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec290 is same with the state(6) to be set 00:25:22.656 [2024-12-05 13:56:53.699156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec290 is same with the state(6) to be set 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8)
00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 Write completed with error (sct=0, sc=8) 00:25:22.656 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write 
completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 [2024-12-05 13:56:53.699973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 
00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 
00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 [2024-12-05 13:56:53.701097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:22.657 [2024-12-05 13:56:53.701184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 [2024-12-05 13:56:53.701217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 [2024-12-05 13:56:53.701235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 [2024-12-05 13:56:53.701247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 starting I/O failed: -6 00:25:22.657 [2024-12-05 13:56:53.701259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 [2024-12-05 13:56:53.701272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 starting I/O failed: -6 00:25:22.657 [2024-12-05 13:56:53.701284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 [2024-12-05 13:56:53.701296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 [2024-12-05 13:56:53.701308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 starting I/O failed: -6 00:25:22.657 [2024-12-05 13:56:53.701321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 [2024-12-05 13:56:53.701333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 starting I/O failed: -6 00:25:22.657 [2024-12-05 13:56:53.701345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17939f0 is same with the state(6) to be set 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 
starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.657 starting I/O failed: -6 00:25:22.657 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 
00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 [2024-12-05 13:56:53.702161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 [2024-12-05 13:56:53.702195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658 starting I/O failed: -6 00:25:22.658 [2024-12-05 13:56:53.702211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658
Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 [2024-12-05 13:56:53.702224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 [2024-12-05 13:56:53.702236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658 starting I/O failed: -6 00:25:22.658 [2024-12-05 13:56:53.702248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 [2024-12-05 13:56:53.702260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658 starting I/O failed: -6 00:25:22.658 [2024-12-05 13:56:53.702273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1794390 is same with the state(6) to be set 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 [2024-12-05 13:56:53.702775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions:
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.658 NVMe io qpair process completion error 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 
Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 [2024-12-05 13:56:53.703879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 Write completed with error (sct=0, sc=8) 00:25:22.658 starting I/O failed: -6 00:25:22.658 Write completed with error (sct=0, sc=8) 
00:25:22.658 Write completed with error (sct=0, sc=8)
00:25:22.658 starting I/O failed: -6
00:25:22.658 [the two lines above repeat, interleaved, throughout this burst (wall clock 00:25:22.658 through 00:25:22.662); only the distinct qpair error events from the burst are kept below]
00:25:22.658 [2024-12-05 13:56:53.704935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.659 [2024-12-05 13:56:53.706040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.659 [2024-12-05 13:56:53.707911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.659 NVMe io qpair process completion error
00:25:22.660 [2024-12-05 13:56:53.709093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.660 [2024-12-05 13:56:53.710124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.660 [2024-12-05 13:56:53.711233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.661 [2024-12-05 13:56:53.713146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.661 NVMe io qpair process completion error
00:25:22.661 [2024-12-05 13:56:53.714392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.662 [2024-12-05 13:56:53.715437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.662 [2024-12-05 13:56:53.716596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 [repeats continue; log truncated]
00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: 
-6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.662 Write completed with error (sct=0, sc=8) 00:25:22.662 starting I/O failed: -6 00:25:22.663 [2024-12-05 13:56:53.718283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device 
or address) on qpair id 4 00:25:22.663 NVMe io qpair process completion error 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 
00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 [2024-12-05 13:56:53.719545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write 
completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error 
(sct=0, sc=8) 00:25:22.663 [2024-12-05 13:56:53.720631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 
00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 Write completed with 
error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 [2024-12-05 13:56:53.721751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.663 starting I/O failed: -6 00:25:22.663 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 
00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: 
-6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 [2024-12-05 13:56:53.725089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device 
or address) on qpair id 2 00:25:22.664 NVMe io qpair process completion error 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error 
(sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 [2024-12-05 13:56:53.726394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write 
completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 Write completed with error (sct=0, sc=8) 00:25:22.664 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 [2024-12-05 13:56:53.727383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting 
I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write 
completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 Write completed with error (sct=0, sc=8) 00:25:22.665 starting I/O failed: -6 00:25:22.665 [2024-12-05 13:56:53.728525] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.665 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:25:22.666 [2024-12-05 13:56:53.731724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.666 NVMe io qpair process completion error
00:25:22.666 [repeated write-completion error lines elided]
00:25:22.666 [2024-12-05 13:56:53.733043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.666 [repeated write-completion error lines elided]
00:25:22.666 [2024-12-05 13:56:53.734023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.667 [repeated write-completion error lines elided]
00:25:22.667 [2024-12-05 13:56:53.735244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.667 [repeated write-completion error lines elided]
00:25:22.667 [2024-12-05 13:56:53.737547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.667 NVMe io qpair process completion error
00:25:22.667 [repeated write-completion error lines elided]
00:25:22.668 [2024-12-05 13:56:53.739746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.668 NVMe io qpair process completion error
00:25:22.668 [repeated write-completion error lines elided]
00:25:22.668 [2024-12-05 13:56:53.740964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.668 [repeated write-completion error lines elided]
00:25:22.668 [2024-12-05 13:56:53.741929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.668 [repeated write-completion error lines elided]
00:25:22.669 [2024-12-05 13:56:53.743036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.669 [repeated write-completion error lines elided]
00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, 
sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error 
(sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 [2024-12-05 13:56:53.744755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.669 NVMe io qpair process completion error 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O 
failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed 
with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 starting I/O failed: -6 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.669 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 
starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 
Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 
00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: 
-6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.670 Write completed with error (sct=0, sc=8) 00:25:22.670 starting I/O failed: -6 00:25:22.671 Write completed with error (sct=0, sc=8) 00:25:22.671 starting I/O failed: -6 00:25:22.671 Write completed with error (sct=0, sc=8) 00:25:22.671 starting I/O failed: -6 00:25:22.671 Write completed with error (sct=0, sc=8) 00:25:22.671 starting I/O failed: -6 00:25:22.671 Write completed with error (sct=0, sc=8) 00:25:22.671 starting I/O 
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:22.671 [2024-12-05 13:56:53.750641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.671 NVMe io qpair process completion error
00:25:22.671 Initializing NVMe Controllers
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:25:22.671 Controller IO queue size 128, less than required.
00:25:22.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:22.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:22.671 Initialization complete. Launching workers.
00:25:22.671 ========================================================
00:25:22.671 Latency(us)
00:25:22.671 Device Information : IOPS MiB/s Average min max
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1816.08 78.03 70429.00 705.92 134648.98
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1857.66 79.82 68897.97 925.62 123772.57
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1817.13 78.08 70447.60 860.43 137843.39
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1783.28 76.63 71029.67 1090.49 121007.63
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1771.79 76.13 71507.14 1070.30 119580.60
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1755.29 75.42 72200.75 1085.75 120382.65
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1793.10 77.05 70699.73 1054.34 123096.68
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1759.05 75.58 72089.68 966.31 124866.18
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1756.13 75.46 72254.28 869.35 119799.54
00:25:22.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1783.91 76.65 71171.84 797.83 132454.61
00:25:22.671 ========================================================
00:25:22.671 Total : 17893.43 768.86 71055.95 705.92 137843.39
00:25:22.671
00:25:22.671 [2024-12-05 13:56:53.756065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabbae0 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaba5f0 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab9d10 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabb720 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab99e0 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabb900 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaba2c0 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabac50 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaba920 is same with the state(6) to be set
00:25:22.671 [2024-12-05 13:56:53.756633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab96b0 is same with the state(6) to be set
00:25:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:25:22.932 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2292133
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2292133
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2292133
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:23.871 rmmod nvme_tcp
00:25:23.871 rmmod nvme_fabrics
00:25:23.871 rmmod nvme_keyring
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2291953 ']'
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2291953
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2291953 ']'
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2291953
00:25:23.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2291953) - No such process
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2291953 is not found'
00:25:23.871 Process with pid 2291953 is not found
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:23.871 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:25.775 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:25.775
00:25:25.775 real 0m9.696s
00:25:25.775 user 0m23.618s
00:25:25.775 sys 0m5.490s
13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:25.775 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:25.775 ************************************ 00:25:25.775 END TEST nvmf_shutdown_tc4 00:25:25.775 ************************************ 00:25:26.034 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:25:26.034 00:25:26.034 real 0m37.333s 00:25:26.034 user 1m40.559s 00:25:26.034 sys 0m11.963s 00:25:26.034 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.034 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:26.034 ************************************ 00:25:26.034 END TEST nvmf_shutdown 00:25:26.034 ************************************ 00:25:26.034 13:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:26.034 13:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:26.034 13:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.034 13:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:26.035 ************************************ 00:25:26.035 START TEST nvmf_nsid 00:25:26.035 ************************************ 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:26.035 * Looking for test storage... 
00:25:26.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.035 
13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:26.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.035 --rc genhtml_branch_coverage=1 00:25:26.035 --rc genhtml_function_coverage=1 00:25:26.035 --rc genhtml_legend=1 00:25:26.035 --rc geninfo_all_blocks=1 00:25:26.035 --rc 
geninfo_unexecuted_blocks=1 00:25:26.035 00:25:26.035 ' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:26.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.035 --rc genhtml_branch_coverage=1 00:25:26.035 --rc genhtml_function_coverage=1 00:25:26.035 --rc genhtml_legend=1 00:25:26.035 --rc geninfo_all_blocks=1 00:25:26.035 --rc geninfo_unexecuted_blocks=1 00:25:26.035 00:25:26.035 ' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:26.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.035 --rc genhtml_branch_coverage=1 00:25:26.035 --rc genhtml_function_coverage=1 00:25:26.035 --rc genhtml_legend=1 00:25:26.035 --rc geninfo_all_blocks=1 00:25:26.035 --rc geninfo_unexecuted_blocks=1 00:25:26.035 00:25:26.035 ' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:26.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.035 --rc genhtml_branch_coverage=1 00:25:26.035 --rc genhtml_function_coverage=1 00:25:26.035 --rc genhtml_legend=1 00:25:26.035 --rc geninfo_all_blocks=1 00:25:26.035 --rc geninfo_unexecuted_blocks=1 00:25:26.035 00:25:26.035 ' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.035 13:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.035 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.036 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:28.571 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:28.571 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.571 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:28.572 Found net devices under 0000:09:00.0: cvl_0_0 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:28.572 Found net devices under 0000:09:00.1: cvl_0_1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:28.572 13:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:28.572 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:25:28.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:25:28.572 00:25:28.572 --- 10.0.0.2 ping statistics --- 00:25:28.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.572 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:25:28.572 00:25:28.572 --- 10.0.0.1 ping statistics --- 00:25:28.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.572 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:28.572 13:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2294875 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2294875 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2294875 ']' 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.572 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:28.572 [2024-12-05 13:56:59.890506] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:25:28.572 [2024-12-05 13:56:59.890600] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.572 [2024-12-05 13:56:59.962494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.572 [2024-12-05 13:57:00.023409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.572 [2024-12-05 13:57:00.023483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.572 [2024-12-05 13:57:00.023514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.572 [2024-12-05 13:57:00.023526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.572 [2024-12-05 13:57:00.023536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:28.572 [2024-12-05 13:57:00.024136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2294919 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.831 
13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3a9e60cc-0d47-4d3f-b26a-7fb9bfe77f43 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=6b465274-fa52-4764-93f6-7cbcbb629b65 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1ea821f8-b83a-433c-8ab6-21e1bf117a3e 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:28.831 null0 00:25:28.831 null1 00:25:28.831 null2 00:25:28.831 [2024-12-05 13:57:00.216177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.831 [2024-12-05 13:57:00.227860] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:25:28.831 [2024-12-05 13:57:00.227939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294919 ] 00:25:28.831 [2024-12-05 13:57:00.240388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2294919 /var/tmp/tgt2.sock 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2294919 ']' 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:25:28.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.831 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:28.831 [2024-12-05 13:57:00.297799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.090 [2024-12-05 13:57:00.358499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.347 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.347 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:29.347 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:29.606 [2024-12-05 13:57:01.034646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.606 [2024-12-05 13:57:01.050848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:29.606 nvme0n1 nvme0n2 00:25:29.606 nvme1n1 00:25:29.606 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:29.606 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:29.606 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:30.173 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3a9e60cc-0d47-4d3f-b26a-7fb9bfe77f43 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:31.546 13:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3a9e60cc0d474d3fb26a7fb9bfe77f43 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3A9E60CC0D474D3FB26A7FB9BFE77F43 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3A9E60CC0D474D3FB26A7FB9BFE77F43 == \3\A\9\E\6\0\C\C\0\D\4\7\4\D\3\F\B\2\6\A\7\F\B\9\B\F\E\7\7\F\4\3 ]] 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 6b465274-fa52-4764-93f6-7cbcbb629b65 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:31.546 
13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6b465274fa52476493f67cbcbb629b65 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6B465274FA52476493F67CBCBB629B65 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 6B465274FA52476493F67CBCBB629B65 == \6\B\4\6\5\2\7\4\F\A\5\2\4\7\6\4\9\3\F\6\7\C\B\C\B\B\6\2\9\B\6\5 ]] 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1ea821f8-b83a-433c-8ab6-21e1bf117a3e 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:31.546 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:25:31.547 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:31.547 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1ea821f8b83a433c8ab621e1bf117a3e 00:25:31.547 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1EA821F8B83A433C8AB621E1BF117A3E 00:25:31.547 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1EA821F8B83A433C8AB621E1BF117A3E == \1\E\A\8\2\1\F\8\B\8\3\A\4\3\3\C\8\A\B\6\2\1\E\1\B\F\1\1\7\A\3\E ]] 00:25:31.547 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2294919 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2294919 ']' 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2294919 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.547 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2294919 00:25:31.804 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:31.805 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:31.805 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2294919' 00:25:31.805 killing process with pid 2294919 00:25:31.805 13:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2294919 00:25:31.805 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2294919 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.063 rmmod nvme_tcp 00:25:32.063 rmmod nvme_fabrics 00:25:32.063 rmmod nvme_keyring 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2294875 ']' 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2294875 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2294875 ']' 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2294875 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:32.063 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.063 13:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2294875 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2294875' 00:25:32.321 killing process with pid 2294875 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2294875 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2294875 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.321 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.321 13:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.855 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:34.855 00:25:34.855 real 0m8.535s 00:25:34.855 user 0m8.380s 00:25:34.855 sys 0m2.759s 00:25:34.855 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.855 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:34.855 ************************************ 00:25:34.855 END TEST nvmf_nsid 00:25:34.855 ************************************ 00:25:34.855 13:57:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:34.855 00:25:34.855 real 11m40.730s 00:25:34.855 user 27m35.588s 00:25:34.855 sys 2m47.835s 00:25:34.855 13:57:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.855 13:57:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:34.855 ************************************ 00:25:34.855 END TEST nvmf_target_extra 00:25:34.855 ************************************ 00:25:34.855 13:57:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:34.855 13:57:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:34.855 13:57:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.855 13:57:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:34.855 ************************************ 00:25:34.855 START TEST nvmf_host 00:25:34.855 ************************************ 00:25:34.855 13:57:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:34.855 * Looking for test storage... 
00:25:34.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.855 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:34.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.856 --rc genhtml_branch_coverage=1 00:25:34.856 --rc genhtml_function_coverage=1 00:25:34.856 --rc genhtml_legend=1 00:25:34.856 --rc geninfo_all_blocks=1 00:25:34.856 --rc geninfo_unexecuted_blocks=1 00:25:34.856 00:25:34.856 ' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:34.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.856 --rc genhtml_branch_coverage=1 00:25:34.856 --rc genhtml_function_coverage=1 00:25:34.856 --rc genhtml_legend=1 00:25:34.856 --rc 
geninfo_all_blocks=1 00:25:34.856 --rc geninfo_unexecuted_blocks=1 00:25:34.856 00:25:34.856 ' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:34.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.856 --rc genhtml_branch_coverage=1 00:25:34.856 --rc genhtml_function_coverage=1 00:25:34.856 --rc genhtml_legend=1 00:25:34.856 --rc geninfo_all_blocks=1 00:25:34.856 --rc geninfo_unexecuted_blocks=1 00:25:34.856 00:25:34.856 ' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:34.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.856 --rc genhtml_branch_coverage=1 00:25:34.856 --rc genhtml_function_coverage=1 00:25:34.856 --rc genhtml_legend=1 00:25:34.856 --rc geninfo_all_blocks=1 00:25:34.856 --rc geninfo_unexecuted_blocks=1 00:25:34.856 00:25:34.856 ' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.856 ************************************ 00:25:34.856 START TEST nvmf_multicontroller 00:25:34.856 ************************************ 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:34.856 * Looking for test storage... 
00:25:34.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:34.856 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.857 --rc genhtml_branch_coverage=1 00:25:34.857 --rc genhtml_function_coverage=1 
00:25:34.857 --rc genhtml_legend=1 00:25:34.857 --rc geninfo_all_blocks=1 00:25:34.857 --rc geninfo_unexecuted_blocks=1 00:25:34.857 00:25:34.857 ' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.857 --rc genhtml_branch_coverage=1 00:25:34.857 --rc genhtml_function_coverage=1 00:25:34.857 --rc genhtml_legend=1 00:25:34.857 --rc geninfo_all_blocks=1 00:25:34.857 --rc geninfo_unexecuted_blocks=1 00:25:34.857 00:25:34.857 ' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.857 --rc genhtml_branch_coverage=1 00:25:34.857 --rc genhtml_function_coverage=1 00:25:34.857 --rc genhtml_legend=1 00:25:34.857 --rc geninfo_all_blocks=1 00:25:34.857 --rc geninfo_unexecuted_blocks=1 00:25:34.857 00:25:34.857 ' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.857 --rc genhtml_branch_coverage=1 00:25:34.857 --rc genhtml_function_coverage=1 00:25:34.857 --rc genhtml_legend=1 00:25:34.857 --rc geninfo_all_blocks=1 00:25:34.857 --rc geninfo_unexecuted_blocks=1 00:25:34.857 00:25:34.857 ' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.857 13:57:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.857 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:37.386 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:37.386 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.386 13:57:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.386 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:37.386 Found net devices under 0000:09:00.0: cvl_0_0 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:37.387 Found net devices under 0000:09:00.1: cvl_0_1 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:25:37.387 00:25:37.387 --- 10.0.0.2 ping statistics --- 00:25:37.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.387 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:25:37.387 00:25:37.387 --- 10.0.0.1 ping statistics --- 00:25:37.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.387 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2297924 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2297924 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2297924 ']' 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.387 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.387 [2024-12-05 13:57:08.642801] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:37.387 [2024-12-05 13:57:08.642891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.387 [2024-12-05 13:57:08.715245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:37.387 [2024-12-05 13:57:08.773790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.387 [2024-12-05 13:57:08.773840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:37.387 [2024-12-05 13:57:08.773869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.387 [2024-12-05 13:57:08.773880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.387 [2024-12-05 13:57:08.773890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.387 [2024-12-05 13:57:08.775342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.387 [2024-12-05 13:57:08.775402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.387 [2024-12-05 13:57:08.775406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 [2024-12-05 13:57:08.948525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 Malloc0 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 [2024-12-05 
13:57:09.013408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 [2024-12-05 13:57:09.021291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 Malloc1 00:25:37.646 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2298012 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2298012 /var/tmp/bdevperf.sock 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2298012 ']' 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:37.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.647 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.905 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.905 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:37.905 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:37.905 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.905 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.185 NVMe0n1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.185 1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:38.185 13:57:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.185 request: 00:25:38.185 { 00:25:38.185 "name": "NVMe0", 00:25:38.185 "trtype": "tcp", 00:25:38.185 "traddr": "10.0.0.2", 00:25:38.185 "adrfam": "ipv4", 00:25:38.185 "trsvcid": "4420", 00:25:38.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.185 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:38.185 "hostaddr": "10.0.0.1", 00:25:38.185 "prchk_reftag": false, 00:25:38.185 "prchk_guard": false, 00:25:38.185 "hdgst": false, 00:25:38.185 "ddgst": false, 00:25:38.185 "allow_unrecognized_csi": false, 00:25:38.185 "method": "bdev_nvme_attach_controller", 00:25:38.185 "req_id": 1 00:25:38.185 } 00:25:38.185 Got JSON-RPC error response 00:25:38.185 response: 00:25:38.185 { 00:25:38.185 "code": -114, 00:25:38.185 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:38.185 } 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:38.185 13:57:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.185 request: 00:25:38.185 { 00:25:38.185 "name": "NVMe0", 00:25:38.185 "trtype": "tcp", 00:25:38.185 "traddr": "10.0.0.2", 00:25:38.185 "adrfam": "ipv4", 00:25:38.185 "trsvcid": "4420", 00:25:38.185 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:38.185 "hostaddr": "10.0.0.1", 00:25:38.185 "prchk_reftag": false, 00:25:38.185 "prchk_guard": false, 00:25:38.185 "hdgst": false, 00:25:38.185 "ddgst": false, 00:25:38.185 "allow_unrecognized_csi": false, 00:25:38.185 "method": "bdev_nvme_attach_controller", 00:25:38.185 "req_id": 1 00:25:38.185 } 00:25:38.185 Got JSON-RPC error response 00:25:38.185 response: 00:25:38.185 { 00:25:38.185 "code": -114, 00:25:38.185 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:38.185 } 00:25:38.185 13:57:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.185 request: 00:25:38.185 { 00:25:38.185 "name": "NVMe0", 00:25:38.185 "trtype": "tcp", 00:25:38.185 "traddr": "10.0.0.2", 00:25:38.185 "adrfam": "ipv4", 00:25:38.185 "trsvcid": "4420", 00:25:38.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.185 "hostaddr": "10.0.0.1", 00:25:38.185 "prchk_reftag": false, 00:25:38.185 "prchk_guard": false, 00:25:38.185 "hdgst": false, 00:25:38.185 "ddgst": false, 00:25:38.185 "multipath": "disable", 00:25:38.185 "allow_unrecognized_csi": false, 00:25:38.185 "method": "bdev_nvme_attach_controller", 00:25:38.185 "req_id": 1 00:25:38.185 } 00:25:38.185 Got JSON-RPC error response 00:25:38.185 response: 00:25:38.185 { 00:25:38.185 "code": -114, 00:25:38.185 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:38.185 } 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:38.185 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.186 request: 00:25:38.186 { 00:25:38.186 "name": "NVMe0", 00:25:38.186 "trtype": "tcp", 00:25:38.186 "traddr": "10.0.0.2", 00:25:38.186 "adrfam": "ipv4", 00:25:38.186 "trsvcid": "4420", 00:25:38.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.186 "hostaddr": "10.0.0.1", 00:25:38.186 "prchk_reftag": false, 00:25:38.186 "prchk_guard": false, 00:25:38.186 "hdgst": false, 00:25:38.186 "ddgst": false, 00:25:38.186 "multipath": "failover", 00:25:38.186 "allow_unrecognized_csi": false, 00:25:38.186 "method": "bdev_nvme_attach_controller", 00:25:38.186 "req_id": 1 00:25:38.186 } 00:25:38.186 Got JSON-RPC error response 00:25:38.186 response: 00:25:38.186 { 00:25:38.186 "code": -114, 00:25:38.186 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:38.186 } 00:25:38.186 13:57:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.186 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.443 NVMe0n1 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.443 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.701 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:38.701 13:57:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:39.636 { 00:25:39.636 "results": [ 00:25:39.636 { 00:25:39.636 "job": "NVMe0n1", 00:25:39.636 "core_mask": "0x1", 00:25:39.636 "workload": "write", 00:25:39.636 "status": "finished", 00:25:39.636 "queue_depth": 128, 00:25:39.636 "io_size": 4096, 00:25:39.636 "runtime": 1.010306, 00:25:39.636 "iops": 18480.539559301837, 00:25:39.636 "mibps": 72.1896076535228, 00:25:39.636 "io_failed": 0, 00:25:39.636 "io_timeout": 0, 00:25:39.636 "avg_latency_us": 6915.140858173796, 00:25:39.636 "min_latency_us": 6019.602962962963, 00:25:39.636 "max_latency_us": 13786.832592592593 00:25:39.636 } 00:25:39.636 ], 00:25:39.636 "core_count": 1 00:25:39.636 } 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2298012 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2298012 ']' 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2298012 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.636 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2298012 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2298012' 00:25:39.894 killing process with pid 2298012 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2298012 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2298012 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:39.894 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:39.894 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:39.894 [2024-12-05 13:57:09.123345] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:25:39.894 [2024-12-05 13:57:09.123469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298012 ] 00:25:39.894 [2024-12-05 13:57:09.197285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.894 [2024-12-05 13:57:09.257440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.894 [2024-12-05 13:57:09.971162] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name f53b62af-efc4-4bf6-bb31-da4c7955e063 already exists 00:25:39.894 [2024-12-05 13:57:09.971200] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:f53b62af-efc4-4bf6-bb31-da4c7955e063 alias for bdev NVMe1n1 00:25:39.894 [2024-12-05 13:57:09.971230] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:39.894 Running I/O for 1 seconds... 00:25:39.894 18416.00 IOPS, 71.94 MiB/s 00:25:39.894 Latency(us) 00:25:39.894 [2024-12-05T12:57:11.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.894 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:39.894 NVMe0n1 : 1.01 18480.54 72.19 0.00 0.00 6915.14 6019.60 13786.83 00:25:39.894 [2024-12-05T12:57:11.420Z] =================================================================================================================== 00:25:39.894 [2024-12-05T12:57:11.420Z] Total : 18480.54 72.19 0.00 0.00 6915.14 6019.60 13786.83 00:25:39.894 Received shutdown signal, test time was about 1.000000 seconds 00:25:39.894 00:25:39.894 Latency(us) 00:25:39.894 [2024-12-05T12:57:11.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.894 [2024-12-05T12:57:11.421Z] =================================================================================================================== 00:25:39.895 [2024-12-05T12:57:11.421Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:25:39.895 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.895 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.895 rmmod nvme_tcp 00:25:40.152 rmmod nvme_fabrics 00:25:40.152 rmmod nvme_keyring 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2297924 ']' 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2297924 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2297924 ']' 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2297924 
00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297924 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:40.152 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:40.153 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297924' 00:25:40.153 killing process with pid 2297924 00:25:40.153 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2297924 00:25:40.153 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2297924 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.412 13:57:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.318 13:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.318 00:25:42.318 real 0m7.691s 00:25:42.318 user 0m12.233s 00:25:42.318 sys 0m2.386s 00:25:42.318 13:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.318 13:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:42.318 ************************************ 00:25:42.318 END TEST nvmf_multicontroller 00:25:42.318 ************************************ 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.576 ************************************ 00:25:42.576 START TEST nvmf_aer 00:25:42.576 ************************************ 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:42.576 * Looking for test storage... 
00:25:42.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:25:42.576 13:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:42.576 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:42.576 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.576 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.576 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.576 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.576 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.576 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.577 --rc genhtml_branch_coverage=1 00:25:42.577 --rc genhtml_function_coverage=1 00:25:42.577 --rc genhtml_legend=1 00:25:42.577 --rc geninfo_all_blocks=1 00:25:42.577 --rc geninfo_unexecuted_blocks=1 00:25:42.577 00:25:42.577 ' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.577 --rc 
genhtml_branch_coverage=1 00:25:42.577 --rc genhtml_function_coverage=1 00:25:42.577 --rc genhtml_legend=1 00:25:42.577 --rc geninfo_all_blocks=1 00:25:42.577 --rc geninfo_unexecuted_blocks=1 00:25:42.577 00:25:42.577 ' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.577 --rc genhtml_branch_coverage=1 00:25:42.577 --rc genhtml_function_coverage=1 00:25:42.577 --rc genhtml_legend=1 00:25:42.577 --rc geninfo_all_blocks=1 00:25:42.577 --rc geninfo_unexecuted_blocks=1 00:25:42.577 00:25:42.577 ' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.577 --rc genhtml_branch_coverage=1 00:25:42.577 --rc genhtml_function_coverage=1 00:25:42.577 --rc genhtml_legend=1 00:25:42.577 --rc geninfo_all_blocks=1 00:25:42.577 --rc geninfo_unexecuted_blocks=1 00:25:42.577 00:25:42.577 ' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.577 13:57:14 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.577 13:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:45.111 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:45.111 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.111 13:57:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:45.111 Found net devices under 0000:09:00.0: cvl_0_0 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:45.111 Found net devices under 0000:09:00.1: cvl_0_1 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:45.111 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:45.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:45.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms
00:25:45.112
00:25:45.112 --- 10.0.0.2 ping statistics ---
00:25:45.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:45.112 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:45.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:45.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:25:45.112
00:25:45.112 --- 10.0.0.1 ping statistics ---
00:25:45.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:45.112 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer --
common/autotest_common.sh@10 -- # set +x 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2300320 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2300320 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2300320 ']' 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.112 [2024-12-05 13:57:16.318139] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:45.112 [2024-12-05 13:57:16.318236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.112 [2024-12-05 13:57:16.396001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.112 [2024-12-05 13:57:16.457022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:45.112 [2024-12-05 13:57:16.457069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.112 [2024-12-05 13:57:16.457098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.112 [2024-12-05 13:57:16.457110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.112 [2024-12-05 13:57:16.457120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.112 [2024-12-05 13:57:16.458835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.112 [2024-12-05 13:57:16.458903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.112 [2024-12-05 13:57:16.458971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.112 [2024-12-05 13:57:16.458974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.112 [2024-12-05 13:57:16.613668] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.112 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.371 Malloc0 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.371 [2024-12-05 13:57:16.685670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.371 [ 00:25:45.371 { 00:25:45.371 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:45.371 "subtype": "Discovery", 00:25:45.371 "listen_addresses": [], 00:25:45.371 "allow_any_host": true, 00:25:45.371 "hosts": [] 00:25:45.371 }, 00:25:45.371 { 00:25:45.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.371 "subtype": "NVMe", 00:25:45.371 "listen_addresses": [ 00:25:45.371 { 00:25:45.371 "trtype": "TCP", 00:25:45.371 "adrfam": "IPv4", 00:25:45.371 "traddr": "10.0.0.2", 00:25:45.371 "trsvcid": "4420" 00:25:45.371 } 00:25:45.371 ], 00:25:45.371 "allow_any_host": true, 00:25:45.371 "hosts": [], 00:25:45.371 "serial_number": "SPDK00000000000001", 00:25:45.371 "model_number": "SPDK bdev Controller", 00:25:45.371 "max_namespaces": 2, 00:25:45.371 "min_cntlid": 1, 00:25:45.371 "max_cntlid": 65519, 00:25:45.371 "namespaces": [ 00:25:45.371 { 00:25:45.371 "nsid": 1, 00:25:45.371 "bdev_name": "Malloc0", 00:25:45.371 "name": "Malloc0", 00:25:45.371 "nguid": "51B3A47ACC0D46EA917B9D97B11382C4", 00:25:45.371 "uuid": "51b3a47a-cc0d-46ea-917b-9d97b11382c4" 00:25:45.371 } 00:25:45.371 ] 00:25:45.371 } 00:25:45.371 ] 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2300463 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:25:45.371 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:45.630 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:45.630 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:25:45.630 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:25:45.630 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 Malloc1 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 [ 00:25:45.630 { 00:25:45.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:45.630 "subtype": "Discovery", 00:25:45.630 "listen_addresses": [], 00:25:45.630 "allow_any_host": true, 00:25:45.630 "hosts": [] 00:25:45.630 }, 00:25:45.630 { 00:25:45.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.630 "subtype": "NVMe", 00:25:45.630 "listen_addresses": [ 00:25:45.630 { 00:25:45.630 "trtype": "TCP", 00:25:45.630 "adrfam": "IPv4", 00:25:45.630 "traddr": "10.0.0.2", 00:25:45.630 "trsvcid": "4420" 00:25:45.630 } 00:25:45.630 ], 00:25:45.630 "allow_any_host": true, 00:25:45.630 "hosts": [], 00:25:45.630 "serial_number": "SPDK00000000000001", 00:25:45.630 "model_number": 
"SPDK bdev Controller", 00:25:45.630 "max_namespaces": 2, 00:25:45.630 "min_cntlid": 1, 00:25:45.630 "max_cntlid": 65519, 00:25:45.630 "namespaces": [ 00:25:45.630 { 00:25:45.630 "nsid": 1, 00:25:45.630 "bdev_name": "Malloc0", 00:25:45.630 "name": "Malloc0", 00:25:45.630 "nguid": "51B3A47ACC0D46EA917B9D97B11382C4", 00:25:45.630 "uuid": "51b3a47a-cc0d-46ea-917b-9d97b11382c4" 00:25:45.630 }, 00:25:45.630 { 00:25:45.630 "nsid": 2, 00:25:45.630 "bdev_name": "Malloc1", 00:25:45.630 "name": "Malloc1", 00:25:45.630 "nguid": "36C84D99FA4C4DBCB73C77C061BA2A73", 00:25:45.630 "uuid": "36c84d99-fa4c-4dbc-b73c-77c061ba2a73" 00:25:45.630 } 00:25:45.630 ] 00:25:45.630 } 00:25:45.630 ] 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2300463 00:25:45.630 Asynchronous Event Request test 00:25:45.630 Attaching to 10.0.0.2 00:25:45.630 Attached to 10.0.0.2 00:25:45.630 Registering asynchronous event callbacks... 00:25:45.630 Starting namespace attribute notice tests for all controllers... 00:25:45.630 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:45.630 aer_cb - Changed Namespace 00:25:45.630 Cleaning up... 
00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.630 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:45.889 rmmod nvme_tcp 
00:25:45.889 rmmod nvme_fabrics 00:25:45.889 rmmod nvme_keyring 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2300320 ']' 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2300320 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2300320 ']' 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2300320 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2300320 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2300320' 00:25:45.889 killing process with pid 2300320 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2300320 00:25:45.889 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2300320 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.149 13:57:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.149 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.057 13:57:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.057 00:25:48.058 real 0m5.667s 00:25:48.058 user 0m4.886s 00:25:48.058 sys 0m2.047s 00:25:48.058 13:57:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.058 13:57:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:48.058 ************************************ 00:25:48.058 END TEST nvmf_aer 00:25:48.058 ************************************ 00:25:48.058 13:57:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:48.058 13:57:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.058 13:57:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.058 13:57:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.317 ************************************ 00:25:48.317 START TEST nvmf_async_init 
00:25:48.317 ************************************ 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:48.317 * Looking for test storage... 00:25:48.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:48.317 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:25:48.317 --rc genhtml_branch_coverage=1 00:25:48.317 --rc genhtml_function_coverage=1 00:25:48.317 --rc genhtml_legend=1 00:25:48.317 --rc geninfo_all_blocks=1 00:25:48.317 --rc geninfo_unexecuted_blocks=1 00:25:48.317 00:25:48.317 ' 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:48.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.317 --rc genhtml_branch_coverage=1 00:25:48.317 --rc genhtml_function_coverage=1 00:25:48.317 --rc genhtml_legend=1 00:25:48.317 --rc geninfo_all_blocks=1 00:25:48.317 --rc geninfo_unexecuted_blocks=1 00:25:48.317 00:25:48.317 ' 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:48.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.317 --rc genhtml_branch_coverage=1 00:25:48.317 --rc genhtml_function_coverage=1 00:25:48.317 --rc genhtml_legend=1 00:25:48.317 --rc geninfo_all_blocks=1 00:25:48.317 --rc geninfo_unexecuted_blocks=1 00:25:48.317 00:25:48.317 ' 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:48.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.317 --rc genhtml_branch_coverage=1 00:25:48.317 --rc genhtml_function_coverage=1 00:25:48.317 --rc genhtml_legend=1 00:25:48.317 --rc geninfo_all_blocks=1 00:25:48.317 --rc geninfo_unexecuted_blocks=1 00:25:48.317 00:25:48.317 ' 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.317 13:57:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.317 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.318 
13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bba902aa6049414ea740deecda8e9639 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.318 13:57:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.905 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.906 13:57:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:50.906 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:50.906 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:50.906 Found net devices under 0000:09:00.0: cvl_0_0 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:50.906 Found net devices under 0000:09:00.1: cvl_0_1 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.906 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.906 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.906 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.906 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.906 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:50.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:25:50.907 00:25:50.907 --- 10.0.0.2 ping statistics --- 00:25:50.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.907 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:25:50.907 00:25:50.907 --- 10.0.0.1 ping statistics --- 00:25:50.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.907 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2302500 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2302500 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2302500 ']' 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 [2024-12-05 13:57:22.105240] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:25:50.907 [2024-12-05 13:57:22.105326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.907 [2024-12-05 13:57:22.185234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.907 [2024-12-05 13:57:22.241046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.907 [2024-12-05 13:57:22.241105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.907 [2024-12-05 13:57:22.241118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.907 [2024-12-05 13:57:22.241129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.907 [2024-12-05 13:57:22.241139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:50.907 [2024-12-05 13:57:22.241771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 [2024-12-05 13:57:22.394274] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 null0 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bba902aa6049414ea740deecda8e9639 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.907 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.163 [2024-12-05 13:57:22.434595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.163 nvme0n1 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.163 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.163 [ 00:25:51.163 { 00:25:51.163 "name": "nvme0n1", 00:25:51.163 "aliases": [ 00:25:51.163 "bba902aa-6049-414e-a740-deecda8e9639" 00:25:51.163 ], 00:25:51.163 "product_name": "NVMe disk", 00:25:51.163 "block_size": 512, 00:25:51.163 "num_blocks": 2097152, 00:25:51.163 "uuid": "bba902aa-6049-414e-a740-deecda8e9639", 00:25:51.163 "numa_id": 0, 00:25:51.163 "assigned_rate_limits": { 00:25:51.163 "rw_ios_per_sec": 0, 00:25:51.163 "rw_mbytes_per_sec": 0, 00:25:51.163 "r_mbytes_per_sec": 0, 00:25:51.163 "w_mbytes_per_sec": 0 00:25:51.163 }, 00:25:51.163 "claimed": false, 00:25:51.163 "zoned": false, 00:25:51.163 "supported_io_types": { 00:25:51.163 "read": true, 00:25:51.163 "write": true, 00:25:51.163 "unmap": false, 00:25:51.163 "flush": true, 00:25:51.163 "reset": true, 00:25:51.163 "nvme_admin": true, 00:25:51.163 "nvme_io": true, 00:25:51.163 "nvme_io_md": false, 00:25:51.163 "write_zeroes": true, 00:25:51.163 "zcopy": false, 00:25:51.163 "get_zone_info": false, 00:25:51.163 "zone_management": false, 00:25:51.163 "zone_append": false, 00:25:51.163 "compare": true, 00:25:51.163 "compare_and_write": true, 00:25:51.163 "abort": true, 00:25:51.163 "seek_hole": false, 00:25:51.163 "seek_data": false, 00:25:51.163 "copy": true, 00:25:51.163 
"nvme_iov_md": false 00:25:51.163 }, 00:25:51.163 "memory_domains": [ 00:25:51.163 { 00:25:51.163 "dma_device_id": "system", 00:25:51.163 "dma_device_type": 1 00:25:51.163 } 00:25:51.163 ], 00:25:51.163 "driver_specific": { 00:25:51.163 "nvme": [ 00:25:51.163 { 00:25:51.163 "trid": { 00:25:51.163 "trtype": "TCP", 00:25:51.163 "adrfam": "IPv4", 00:25:51.163 "traddr": "10.0.0.2", 00:25:51.163 "trsvcid": "4420", 00:25:51.163 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:51.163 }, 00:25:51.163 "ctrlr_data": { 00:25:51.163 "cntlid": 1, 00:25:51.163 "vendor_id": "0x8086", 00:25:51.163 "model_number": "SPDK bdev Controller", 00:25:51.163 "serial_number": "00000000000000000000", 00:25:51.163 "firmware_revision": "25.01", 00:25:51.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.163 "oacs": { 00:25:51.163 "security": 0, 00:25:51.163 "format": 0, 00:25:51.163 "firmware": 0, 00:25:51.163 "ns_manage": 0 00:25:51.163 }, 00:25:51.163 "multi_ctrlr": true, 00:25:51.163 "ana_reporting": false 00:25:51.163 }, 00:25:51.163 "vs": { 00:25:51.163 "nvme_version": "1.3" 00:25:51.163 }, 00:25:51.163 "ns_data": { 00:25:51.163 "id": 1, 00:25:51.163 "can_share": true 00:25:51.163 } 00:25:51.163 } 00:25:51.163 ], 00:25:51.163 "mp_policy": "active_passive" 00:25:51.164 } 00:25:51.164 } 00:25:51.164 ] 00:25:51.164 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.164 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:51.164 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.164 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.164 [2024-12-05 13:57:22.683621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:51.164 [2024-12-05 13:57:22.683713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x262fb00 (9): Bad file descriptor 00:25:51.420 [2024-12-05 13:57:22.815554] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:25:51.420 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.420 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:51.420 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.420 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.420 [ 00:25:51.420 { 00:25:51.420 "name": "nvme0n1", 00:25:51.420 "aliases": [ 00:25:51.420 "bba902aa-6049-414e-a740-deecda8e9639" 00:25:51.420 ], 00:25:51.420 "product_name": "NVMe disk", 00:25:51.420 "block_size": 512, 00:25:51.420 "num_blocks": 2097152, 00:25:51.420 "uuid": "bba902aa-6049-414e-a740-deecda8e9639", 00:25:51.420 "numa_id": 0, 00:25:51.420 "assigned_rate_limits": { 00:25:51.420 "rw_ios_per_sec": 0, 00:25:51.420 "rw_mbytes_per_sec": 0, 00:25:51.420 "r_mbytes_per_sec": 0, 00:25:51.420 "w_mbytes_per_sec": 0 00:25:51.420 }, 00:25:51.420 "claimed": false, 00:25:51.420 "zoned": false, 00:25:51.420 "supported_io_types": { 00:25:51.420 "read": true, 00:25:51.420 "write": true, 00:25:51.420 "unmap": false, 00:25:51.420 "flush": true, 00:25:51.420 "reset": true, 00:25:51.420 "nvme_admin": true, 00:25:51.420 "nvme_io": true, 00:25:51.420 "nvme_io_md": false, 00:25:51.420 "write_zeroes": true, 00:25:51.420 "zcopy": false, 00:25:51.420 "get_zone_info": false, 00:25:51.420 "zone_management": false, 00:25:51.420 "zone_append": false, 00:25:51.420 "compare": true, 00:25:51.420 "compare_and_write": true, 00:25:51.420 "abort": true, 00:25:51.420 "seek_hole": false, 00:25:51.420 "seek_data": false, 00:25:51.420 "copy": true, 00:25:51.420 "nvme_iov_md": false 00:25:51.420 }, 00:25:51.420 "memory_domains": [ 
00:25:51.420 { 00:25:51.420 "dma_device_id": "system", 00:25:51.420 "dma_device_type": 1 00:25:51.420 } 00:25:51.420 ], 00:25:51.420 "driver_specific": { 00:25:51.420 "nvme": [ 00:25:51.420 { 00:25:51.420 "trid": { 00:25:51.420 "trtype": "TCP", 00:25:51.420 "adrfam": "IPv4", 00:25:51.420 "traddr": "10.0.0.2", 00:25:51.420 "trsvcid": "4420", 00:25:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:51.420 }, 00:25:51.420 "ctrlr_data": { 00:25:51.420 "cntlid": 2, 00:25:51.420 "vendor_id": "0x8086", 00:25:51.420 "model_number": "SPDK bdev Controller", 00:25:51.420 "serial_number": "00000000000000000000", 00:25:51.421 "firmware_revision": "25.01", 00:25:51.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.421 "oacs": { 00:25:51.421 "security": 0, 00:25:51.421 "format": 0, 00:25:51.421 "firmware": 0, 00:25:51.421 "ns_manage": 0 00:25:51.421 }, 00:25:51.421 "multi_ctrlr": true, 00:25:51.421 "ana_reporting": false 00:25:51.421 }, 00:25:51.421 "vs": { 00:25:51.421 "nvme_version": "1.3" 00:25:51.421 }, 00:25:51.421 "ns_data": { 00:25:51.421 "id": 1, 00:25:51.421 "can_share": true 00:25:51.421 } 00:25:51.421 } 00:25:51.421 ], 00:25:51.421 "mp_policy": "active_passive" 00:25:51.421 } 00:25:51.421 } 00:25:51.421 ] 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6YGzjDQQaj 
00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6YGzjDQQaj 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6YGzjDQQaj 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.421 [2024-12-05 13:57:22.868266] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:51.421 [2024-12-05 13:57:22.868431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.421 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.421 [2024-12-05 13:57:22.884313] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:51.715 nvme0n1 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.715 [ 00:25:51.715 { 00:25:51.715 "name": "nvme0n1", 00:25:51.715 "aliases": [ 00:25:51.715 "bba902aa-6049-414e-a740-deecda8e9639" 00:25:51.715 ], 00:25:51.715 "product_name": "NVMe disk", 00:25:51.715 "block_size": 512, 00:25:51.715 "num_blocks": 2097152, 00:25:51.715 "uuid": "bba902aa-6049-414e-a740-deecda8e9639", 00:25:51.715 "numa_id": 0, 00:25:51.715 "assigned_rate_limits": { 00:25:51.715 "rw_ios_per_sec": 0, 00:25:51.715 
"rw_mbytes_per_sec": 0, 00:25:51.715 "r_mbytes_per_sec": 0, 00:25:51.715 "w_mbytes_per_sec": 0 00:25:51.715 }, 00:25:51.715 "claimed": false, 00:25:51.715 "zoned": false, 00:25:51.715 "supported_io_types": { 00:25:51.715 "read": true, 00:25:51.715 "write": true, 00:25:51.715 "unmap": false, 00:25:51.715 "flush": true, 00:25:51.715 "reset": true, 00:25:51.715 "nvme_admin": true, 00:25:51.715 "nvme_io": true, 00:25:51.715 "nvme_io_md": false, 00:25:51.715 "write_zeroes": true, 00:25:51.715 "zcopy": false, 00:25:51.715 "get_zone_info": false, 00:25:51.715 "zone_management": false, 00:25:51.715 "zone_append": false, 00:25:51.715 "compare": true, 00:25:51.715 "compare_and_write": true, 00:25:51.715 "abort": true, 00:25:51.715 "seek_hole": false, 00:25:51.715 "seek_data": false, 00:25:51.715 "copy": true, 00:25:51.715 "nvme_iov_md": false 00:25:51.715 }, 00:25:51.715 "memory_domains": [ 00:25:51.715 { 00:25:51.715 "dma_device_id": "system", 00:25:51.715 "dma_device_type": 1 00:25:51.715 } 00:25:51.715 ], 00:25:51.715 "driver_specific": { 00:25:51.715 "nvme": [ 00:25:51.715 { 00:25:51.715 "trid": { 00:25:51.715 "trtype": "TCP", 00:25:51.715 "adrfam": "IPv4", 00:25:51.715 "traddr": "10.0.0.2", 00:25:51.715 "trsvcid": "4421", 00:25:51.715 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:51.715 }, 00:25:51.715 "ctrlr_data": { 00:25:51.715 "cntlid": 3, 00:25:51.715 "vendor_id": "0x8086", 00:25:51.715 "model_number": "SPDK bdev Controller", 00:25:51.715 "serial_number": "00000000000000000000", 00:25:51.715 "firmware_revision": "25.01", 00:25:51.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.715 "oacs": { 00:25:51.715 "security": 0, 00:25:51.715 "format": 0, 00:25:51.715 "firmware": 0, 00:25:51.715 "ns_manage": 0 00:25:51.715 }, 00:25:51.715 "multi_ctrlr": true, 00:25:51.715 "ana_reporting": false 00:25:51.715 }, 00:25:51.715 "vs": { 00:25:51.715 "nvme_version": "1.3" 00:25:51.715 }, 00:25:51.715 "ns_data": { 00:25:51.715 "id": 1, 00:25:51.715 "can_share": true 00:25:51.715 } 
00:25:51.715 } 00:25:51.715 ], 00:25:51.715 "mp_policy": "active_passive" 00:25:51.715 } 00:25:51.715 } 00:25:51.715 ] 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6YGzjDQQaj 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.715 13:57:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.715 rmmod nvme_tcp 00:25:51.715 rmmod nvme_fabrics 00:25:51.715 rmmod nvme_keyring 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:51.715 13:57:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2302500 ']' 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2302500 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2302500 ']' 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2302500 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2302500 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2302500' 00:25:51.715 killing process with pid 2302500 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2302500 00:25:51.715 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2302500 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.995 
13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.995 13:57:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.900 13:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.900 00:25:53.900 real 0m5.703s 00:25:53.900 user 0m2.159s 00:25:53.900 sys 0m1.955s 00:25:53.900 13:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.900 13:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.900 ************************************ 00:25:53.901 END TEST nvmf_async_init 00:25:53.901 ************************************ 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.901 ************************************ 00:25:53.901 START TEST dma 00:25:53.901 ************************************ 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:25:53.901 * Looking for test storage... 00:25:53.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:25:53.901 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.159 --rc genhtml_branch_coverage=1 00:25:54.159 --rc genhtml_function_coverage=1 00:25:54.159 --rc genhtml_legend=1 00:25:54.159 --rc geninfo_all_blocks=1 00:25:54.159 --rc geninfo_unexecuted_blocks=1 00:25:54.159 00:25:54.159 ' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.159 --rc genhtml_branch_coverage=1 00:25:54.159 --rc genhtml_function_coverage=1 
00:25:54.159 --rc genhtml_legend=1 00:25:54.159 --rc geninfo_all_blocks=1 00:25:54.159 --rc geninfo_unexecuted_blocks=1 00:25:54.159 00:25:54.159 ' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.159 --rc genhtml_branch_coverage=1 00:25:54.159 --rc genhtml_function_coverage=1 00:25:54.159 --rc genhtml_legend=1 00:25:54.159 --rc geninfo_all_blocks=1 00:25:54.159 --rc geninfo_unexecuted_blocks=1 00:25:54.159 00:25:54.159 ' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.159 --rc genhtml_branch_coverage=1 00:25:54.159 --rc genhtml_function_coverage=1 00:25:54.159 --rc genhtml_legend=1 00:25:54.159 --rc geninfo_all_blocks=1 00:25:54.159 --rc geninfo_unexecuted_blocks=1 00:25:54.159 00:25:54.159 ' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:54.159 
13:57:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:54.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:54.159 13:57:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:54.159 00:25:54.160 real 0m0.167s 00:25:54.160 user 0m0.113s 00:25:54.160 sys 0m0.063s 00:25:54.160 13:57:25 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.160 ************************************ 00:25:54.160 END TEST dma 00:25:54.160 ************************************ 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.160 ************************************ 00:25:54.160 START TEST nvmf_identify 00:25:54.160 ************************************ 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:54.160 * Looking for test storage... 
00:25:54.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:25:54.160 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:54.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.419 --rc genhtml_branch_coverage=1 00:25:54.419 --rc genhtml_function_coverage=1 00:25:54.419 --rc genhtml_legend=1 00:25:54.419 --rc geninfo_all_blocks=1 00:25:54.419 --rc geninfo_unexecuted_blocks=1 00:25:54.419 00:25:54.419 ' 00:25:54.419 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:25:54.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.420 --rc genhtml_branch_coverage=1 00:25:54.420 --rc genhtml_function_coverage=1 00:25:54.420 --rc genhtml_legend=1 00:25:54.420 --rc geninfo_all_blocks=1 00:25:54.420 --rc geninfo_unexecuted_blocks=1 00:25:54.420 00:25:54.420 ' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:54.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.420 --rc genhtml_branch_coverage=1 00:25:54.420 --rc genhtml_function_coverage=1 00:25:54.420 --rc genhtml_legend=1 00:25:54.420 --rc geninfo_all_blocks=1 00:25:54.420 --rc geninfo_unexecuted_blocks=1 00:25:54.420 00:25:54.420 ' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:54.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.420 --rc genhtml_branch_coverage=1 00:25:54.420 --rc genhtml_function_coverage=1 00:25:54.420 --rc genhtml_legend=1 00:25:54.420 --rc geninfo_all_blocks=1 00:25:54.420 --rc geninfo_unexecuted_blocks=1 00:25:54.420 00:25:54.420 ' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:54.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:54.420 13:57:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:56.325 13:57:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:56.325 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.325 
13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:56.325 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:56.325 Found net devices under 0000:09:00.0: cvl_0_0 00:25:56.325 13:57:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:56.325 Found net devices under 0000:09:00.1: cvl_0_1 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:56.325 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.326 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:56.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:25:56.584 00:25:56.584 --- 10.0.0.2 ping statistics --- 00:25:56.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.584 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:25:56.584 00:25:56.584 --- 10.0.0.1 ping statistics --- 00:25:56.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.584 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:56.584 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2304677 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2304677 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2304677 ']' 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
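The per-namespace network plumbing that nvmf_tcp_init performed a few lines above can be summarized as plain iproute2/iptables commands. A minimal dry-run sketch, assuming this rig's cvl_0_0/cvl_0_1 interface names from the log; it only prints the commands, since actually executing them needs root plus those NICs:

```shell
#!/usr/bin/env bash
# Dry-run of the NVMe/TCP test network setup recorded in the log above.
# Commands are printed, not executed: they require root plus the rig's
# cvl_0_0 (target side) and cvl_0_1 (initiator side) interfaces.
setup_cmds=(
  "ip netns add cvl_0_0_ns_spdk"                                       # target namespace
  "ip link set cvl_0_0 netns cvl_0_0_ns_spdk"                          # move target NIC into it
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                                # initiator IP
  "ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0"  # target IP
  "ip link set cvl_0_1 up"
  "ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up"
  "ip netns exec cvl_0_0_ns_spdk ip link set lo up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"       # open the NVMe/TCP port
  "ping -c 1 10.0.0.2"                                                 # initiator -> target check
  "ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1"                   # target -> initiator check
)
for cmd in "${setup_cmds[@]}"; do
  echo "+ $cmd"
done
```

Both ping probes in the log come back with 0% packet loss, which is why nvmf_tcp_init falls through to `return 0` and the test proceeds.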
00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.585 13:57:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.585 [2024-12-05 13:57:28.045544] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:56.585 [2024-12-05 13:57:28.045635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.845 [2024-12-05 13:57:28.117728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.845 [2024-12-05 13:57:28.174620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.845 [2024-12-05 13:57:28.174672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.845 [2024-12-05 13:57:28.174700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.845 [2024-12-05 13:57:28.174711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.845 [2024-12-05 13:57:28.174720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
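The target itself is started inside that namespace. A dry-run sketch of the launch from host/identify.sh@18 and of the trace-capture hints its startup notices print; `SPDK_DIR` is a placeholder for the workspace checkout used in this run:

```shell
#!/usr/bin/env bash
# Dry-run: the nvmf_tgt launch plus the trace hints from its startup notices.
# SPDK_DIR is a placeholder path, not the actual Jenkins workspace.
SPDK_DIR=/path/to/spdk
echo "+ ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF"
#   -i 0      shm id (the same '-i 0' that spdk_trace must be given)
#   -e 0xFFFF enable all tracepoint groups
#   -m 0xF    run reactors on cores 0-3
echo "+ spdk_trace -s nvmf -i 0       # snapshot events at runtime"
echo "+ cp /dev/shm/nvmf_trace.0 ./   # or keep the shm file for offline analysis"
```

The four "Reactor started on core N" notices that follow match the 0xF core mask.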
00:25:56.845 [2024-12-05 13:57:28.176596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.845 [2024-12-05 13:57:28.176657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.845 [2024-12-05 13:57:28.176707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.845 [2024-12-05 13:57:28.176711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.845 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.845 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:56.845 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:56.845 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.845 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.845 [2024-12-05 13:57:28.296265] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.845 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.846 Malloc0 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.846 13:57:28 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.846 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:57.108 [2024-12-05 13:57:28.375253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:57.108 13:57:28 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:57.108 [ 00:25:57.108 { 00:25:57.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:57.108 "subtype": "Discovery", 00:25:57.108 "listen_addresses": [ 00:25:57.108 { 00:25:57.108 "trtype": "TCP", 00:25:57.108 "adrfam": "IPv4", 00:25:57.108 "traddr": "10.0.0.2", 00:25:57.108 "trsvcid": "4420" 00:25:57.108 } 00:25:57.108 ], 00:25:57.108 "allow_any_host": true, 00:25:57.108 "hosts": [] 00:25:57.108 }, 00:25:57.108 { 00:25:57.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.108 "subtype": "NVMe", 00:25:57.108 "listen_addresses": [ 00:25:57.108 { 00:25:57.108 "trtype": "TCP", 00:25:57.108 "adrfam": "IPv4", 00:25:57.108 "traddr": "10.0.0.2", 00:25:57.108 "trsvcid": "4420" 00:25:57.108 } 00:25:57.108 ], 00:25:57.108 "allow_any_host": true, 00:25:57.108 "hosts": [], 00:25:57.108 "serial_number": "SPDK00000000000001", 00:25:57.108 "model_number": "SPDK bdev Controller", 00:25:57.108 "max_namespaces": 32, 00:25:57.108 "min_cntlid": 1, 00:25:57.108 "max_cntlid": 65519, 00:25:57.108 "namespaces": [ 00:25:57.108 { 00:25:57.108 "nsid": 1, 00:25:57.108 "bdev_name": "Malloc0", 00:25:57.108 "name": "Malloc0", 00:25:57.108 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:57.108 "eui64": "ABCDEF0123456789", 00:25:57.108 "uuid": "d15444f2-ee97-4dc4-a932-d26c26519dd3" 00:25:57.108 } 00:25:57.108 ] 00:25:57.108 } 00:25:57.108 ] 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.108 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:57.108 [2024-12-05 13:57:28.413794] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:57.108 [2024-12-05 13:57:28.413831] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304703 ] 00:25:57.108 [2024-12-05 13:57:28.463656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:57.108 [2024-12-05 13:57:28.463735] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:57.108 [2024-12-05 13:57:28.463746] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:57.108 [2024-12-05 13:57:28.463765] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:57.108 [2024-12-05 13:57:28.463778] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:57.108 [2024-12-05 13:57:28.467834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:57.108 [2024-12-05 13:57:28.467899] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc08690 0 00:25:57.108 [2024-12-05 13:57:28.475429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:57.108 [2024-12-05 13:57:28.475454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:57.108 [2024-12-05 13:57:28.475463] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:57.108 [2024-12-05 13:57:28.475470] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:57.108 [2024-12-05 13:57:28.475519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.475533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.475541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.108 [2024-12-05 13:57:28.475559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:57.108 [2024-12-05 13:57:28.475586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.108 [2024-12-05 13:57:28.483432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.108 [2024-12-05 13:57:28.483451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.108 [2024-12-05 13:57:28.483459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.483482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.108 [2024-12-05 13:57:28.483504] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:57.108 [2024-12-05 13:57:28.483519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:57.108 [2024-12-05 13:57:28.483529] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:57.108 [2024-12-05 13:57:28.483555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.483566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.483577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 
00:25:57.108 [2024-12-05 13:57:28.483589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.108 [2024-12-05 13:57:28.483615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.108 [2024-12-05 13:57:28.483755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.108 [2024-12-05 13:57:28.483771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.108 [2024-12-05 13:57:28.483778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.483785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.108 [2024-12-05 13:57:28.483802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:57.108 [2024-12-05 13:57:28.483819] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:57.108 [2024-12-05 13:57:28.483832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.483839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.483849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.108 [2024-12-05 13:57:28.483861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.108 [2024-12-05 13:57:28.483884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.108 [2024-12-05 13:57:28.483962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.108 [2024-12-05 13:57:28.483977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
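The rpc_cmd calls earlier in this run and the spdk_nvme_identify invocation at host/identify.sh@39 are what produce the nvme_ctrlr/nvme_tcp DEBUG stream around this point. A dry-run sketch of that sequence, written here as the equivalent scripts/rpc.py calls (`SPDK_DIR` is a placeholder; rpc_cmd in the harness issues the same RPC method names):

```shell
#!/usr/bin/env bash
# Dry-run of the target configuration + identify run from host/identify.sh.
# SPDK_DIR is a placeholder; the harness's rpc_cmd wraps the same RPCs.
SPDK_DIR=/path/to/spdk
rpc_calls=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_malloc_create 64 512 -b Malloc0"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  "nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420"
)
for call in "${rpc_calls[@]}"; do
  echo "+ $SPDK_DIR/scripts/rpc.py $call"
done
# The identify tool is then pointed at the discovery subsystem with all
# debug log flags on, which is what emits the DEBUG lines in this log:
echo "+ $SPDK_DIR/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all"
```

The DEBUG lines trace the standard fabric bring-up: FABRIC CONNECT on the admin queue, property reads of VS/CAP/CC, CC.EN set to 1, a poll until CSTS.RDY = 1, then IDENTIFY.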
00:25:57.108 [2024-12-05 13:57:28.483984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.483991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.108 [2024-12-05 13:57:28.484001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:57.108 [2024-12-05 13:57:28.484018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:57.108 [2024-12-05 13:57:28.484031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.484039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.108 [2024-12-05 13:57:28.484046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.484059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-12-05 13:57:28.484083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.109 [2024-12-05 13:57:28.484155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.109 [2024-12-05 13:57:28.484170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.109 [2024-12-05 13:57:28.484177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.109 [2024-12-05 13:57:28.484193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:57.109 [2024-12-05 13:57:28.484214] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.484241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-12-05 13:57:28.484274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.109 [2024-12-05 13:57:28.484349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.109 [2024-12-05 13:57:28.484364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.109 [2024-12-05 13:57:28.484371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.109 [2024-12-05 13:57:28.484386] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:57.109 [2024-12-05 13:57:28.484398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:57.109 [2024-12-05 13:57:28.484412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:57.109 [2024-12-05 13:57:28.484532] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:57.109 [2024-12-05 13:57:28.484543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:25:57.109 [2024-12-05 13:57:28.484558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.484582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-12-05 13:57:28.484605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.109 [2024-12-05 13:57:28.484730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.109 [2024-12-05 13:57:28.484745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.109 [2024-12-05 13:57:28.484752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.109 [2024-12-05 13:57:28.484767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:57.109 [2024-12-05 13:57:28.484786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.484814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-12-05 13:57:28.484836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.109 [2024-12-05 
13:57:28.484923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.109 [2024-12-05 13:57:28.484938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.109 [2024-12-05 13:57:28.484945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.484952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.109 [2024-12-05 13:57:28.484960] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:57.109 [2024-12-05 13:57:28.484971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:57.109 [2024-12-05 13:57:28.484986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:57.109 [2024-12-05 13:57:28.485004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:57.109 [2024-12-05 13:57:28.485022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.485043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-12-05 13:57:28.485065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.109 [2024-12-05 13:57:28.485190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.109 [2024-12-05 13:57:28.485205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:25:57.109 [2024-12-05 13:57:28.485212] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485219] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc08690): datao=0, datal=4096, cccid=0 00:25:57.109 [2024-12-05 13:57:28.485230] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6a100) on tqpair(0xc08690): expected_datao=0, payload_size=4096 00:25:57.109 [2024-12-05 13:57:28.485243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485255] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485264] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.109 [2024-12-05 13:57:28.485287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.109 [2024-12-05 13:57:28.485294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.109 [2024-12-05 13:57:28.485313] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:57.109 [2024-12-05 13:57:28.485322] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:57.109 [2024-12-05 13:57:28.485329] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:57.109 [2024-12-05 13:57:28.485338] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:57.109 [2024-12-05 13:57:28.485346] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:25:57.109 [2024-12-05 13:57:28.485354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:57.109 [2024-12-05 13:57:28.485369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:57.109 [2024-12-05 13:57:28.485384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.485410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:57.109 [2024-12-05 13:57:28.485440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.109 [2024-12-05 13:57:28.485539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.109 [2024-12-05 13:57:28.485554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.109 [2024-12-05 13:57:28.485561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.109 [2024-12-05 13:57:28.485585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.485613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.109 [2024-12-05 13:57:28.485623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.485645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.109 [2024-12-05 13:57:28.485655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.485676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.109 [2024-12-05 13:57:28.485686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.485708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.109 [2024-12-05 13:57:28.485717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:57.109 [2024-12-05 13:57:28.485753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:25:57.109 [2024-12-05 13:57:28.485777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.109 [2024-12-05 13:57:28.485784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc08690) 00:25:57.109 [2024-12-05 13:57:28.485794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-12-05 13:57:28.485816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a100, cid 0, qid 0 00:25:57.110 [2024-12-05 13:57:28.485842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a280, cid 1, qid 0 00:25:57.110 [2024-12-05 13:57:28.485849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a400, cid 2, qid 0 00:25:57.110 [2024-12-05 13:57:28.485857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.110 [2024-12-05 13:57:28.485864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a700, cid 4, qid 0 00:25:57.110 [2024-12-05 13:57:28.486057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.110 [2024-12-05 13:57:28.486072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.110 [2024-12-05 13:57:28.486079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.486086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a700) on tqpair=0xc08690 00:25:57.110 [2024-12-05 13:57:28.486095] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:57.110 [2024-12-05 13:57:28.486107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:57.110 [2024-12-05 13:57:28.486126] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.486139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc08690) 00:25:57.110 [2024-12-05 13:57:28.486153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-12-05 13:57:28.486176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a700, cid 4, qid 0 00:25:57.110 [2024-12-05 13:57:28.486307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.110 [2024-12-05 13:57:28.486325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.110 [2024-12-05 13:57:28.486333] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.486339] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc08690): datao=0, datal=4096, cccid=4 00:25:57.110 [2024-12-05 13:57:28.486346] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6a700) on tqpair(0xc08690): expected_datao=0, payload_size=4096 00:25:57.110 [2024-12-05 13:57:28.486354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.486371] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.486380] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.110 [2024-12-05 13:57:28.531450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.110 [2024-12-05 13:57:28.531458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a700) on tqpair=0xc08690 00:25:57.110 [2024-12-05 13:57:28.531486] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:57.110 [2024-12-05 13:57:28.531525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc08690) 00:25:57.110 [2024-12-05 13:57:28.531548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-12-05 13:57:28.531560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc08690) 00:25:57.110 [2024-12-05 13:57:28.531582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.110 [2024-12-05 13:57:28.531610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a700, cid 4, qid 0 00:25:57.110 [2024-12-05 13:57:28.531637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a880, cid 5, qid 0 00:25:57.110 [2024-12-05 13:57:28.531792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.110 [2024-12-05 13:57:28.531810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.110 [2024-12-05 13:57:28.531818] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531825] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc08690): datao=0, datal=1024, cccid=4 00:25:57.110 [2024-12-05 13:57:28.531833] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6a700) on tqpair(0xc08690): expected_datao=0, 
payload_size=1024 00:25:57.110 [2024-12-05 13:57:28.531840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531850] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531857] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.110 [2024-12-05 13:57:28.531875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.110 [2024-12-05 13:57:28.531882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.531893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a880) on tqpair=0xc08690 00:25:57.110 [2024-12-05 13:57:28.573567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.110 [2024-12-05 13:57:28.573587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.110 [2024-12-05 13:57:28.573595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a700) on tqpair=0xc08690 00:25:57.110 [2024-12-05 13:57:28.573620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc08690) 00:25:57.110 [2024-12-05 13:57:28.573640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-12-05 13:57:28.573673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a700, cid 4, qid 0 00:25:57.110 [2024-12-05 13:57:28.573775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.110 [2024-12-05 13:57:28.573790] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.110 [2024-12-05 13:57:28.573797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573804] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc08690): datao=0, datal=3072, cccid=4 00:25:57.110 [2024-12-05 13:57:28.573815] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6a700) on tqpair(0xc08690): expected_datao=0, payload_size=3072 00:25:57.110 [2024-12-05 13:57:28.573823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573834] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573842] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.110 [2024-12-05 13:57:28.573885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.110 [2024-12-05 13:57:28.573892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a700) on tqpair=0xc08690 00:25:57.110 [2024-12-05 13:57:28.573916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.110 [2024-12-05 13:57:28.573926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc08690) 00:25:57.110 [2024-12-05 13:57:28.573937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-12-05 13:57:28.573968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a700, cid 4, qid 0 00:25:57.110 [2024-12-05 13:57:28.574073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.110 [2024-12-05 
13:57:28.574088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:57.110 [2024-12-05 13:57:28.574095] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:57.110 [2024-12-05 13:57:28.574102] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc08690): datao=0, datal=8, cccid=4
00:25:57.110 [2024-12-05 13:57:28.574109] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6a700) on tqpair(0xc08690): expected_datao=0, payload_size=8
00:25:57.110 [2024-12-05 13:57:28.574120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:57.110 [2024-12-05 13:57:28.574131] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:57.110 [2024-12-05 13:57:28.574138] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:57.110 [2024-12-05 13:57:28.619440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:57.110 [2024-12-05 13:57:28.619459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:57.110 [2024-12-05 13:57:28.619482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:57.110 [2024-12-05 13:57:28.619489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a700) on tqpair=0xc08690
00:25:57.110 =====================================================
00:25:57.110 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:57.110 =====================================================
00:25:57.110 Controller Capabilities/Features
00:25:57.110 ================================
00:25:57.110 Vendor ID: 0000
00:25:57.110 Subsystem Vendor ID: 0000
00:25:57.110 Serial Number: ....................
00:25:57.110 Model Number: ........................................
00:25:57.110 Firmware Version: 25.01
00:25:57.110 Recommended Arb Burst: 0
00:25:57.110 IEEE OUI Identifier: 00 00 00
00:25:57.110 Multi-path I/O
00:25:57.110 May have multiple subsystem ports: No
00:25:57.110 May have multiple controllers: No
00:25:57.110 Associated with SR-IOV VF: No
00:25:57.110 Max Data Transfer Size: 131072
00:25:57.110 Max Number of Namespaces: 0
00:25:57.110 Max Number of I/O Queues: 1024
00:25:57.110 NVMe Specification Version (VS): 1.3
00:25:57.110 NVMe Specification Version (Identify): 1.3
00:25:57.110 Maximum Queue Entries: 128
00:25:57.110 Contiguous Queues Required: Yes
00:25:57.110 Arbitration Mechanisms Supported
00:25:57.110 Weighted Round Robin: Not Supported
00:25:57.110 Vendor Specific: Not Supported
00:25:57.110 Reset Timeout: 15000 ms
00:25:57.110 Doorbell Stride: 4 bytes
00:25:57.110 NVM Subsystem Reset: Not Supported
00:25:57.110 Command Sets Supported
00:25:57.110 NVM Command Set: Supported
00:25:57.110 Boot Partition: Not Supported
00:25:57.110 Memory Page Size Minimum: 4096 bytes
00:25:57.110 Memory Page Size Maximum: 4096 bytes
00:25:57.110 Persistent Memory Region: Not Supported
00:25:57.110 Optional Asynchronous Events Supported
00:25:57.110 Namespace Attribute Notices: Not Supported
00:25:57.110 Firmware Activation Notices: Not Supported
00:25:57.111 ANA Change Notices: Not Supported
00:25:57.111 PLE Aggregate Log Change Notices: Not Supported
00:25:57.111 LBA Status Info Alert Notices: Not Supported
00:25:57.111 EGE Aggregate Log Change Notices: Not Supported
00:25:57.111 Normal NVM Subsystem Shutdown event: Not Supported
00:25:57.111 Zone Descriptor Change Notices: Not Supported
00:25:57.111 Discovery Log Change Notices: Supported
00:25:57.111 Controller Attributes
00:25:57.111 128-bit Host Identifier: Not Supported
00:25:57.111 Non-Operational Permissive Mode: Not Supported
00:25:57.111 NVM Sets: Not Supported
00:25:57.111 Read Recovery Levels: Not Supported
00:25:57.111 Endurance Groups: Not Supported
00:25:57.111 Predictable Latency Mode: Not Supported
00:25:57.111 Traffic Based Keep ALive: Not Supported
00:25:57.111 Namespace Granularity: Not Supported
00:25:57.111 SQ Associations: Not Supported
00:25:57.111 UUID List: Not Supported
00:25:57.111 Multi-Domain Subsystem: Not Supported
00:25:57.111 Fixed Capacity Management: Not Supported
00:25:57.111 Variable Capacity Management: Not Supported
00:25:57.111 Delete Endurance Group: Not Supported
00:25:57.111 Delete NVM Set: Not Supported
00:25:57.111 Extended LBA Formats Supported: Not Supported
00:25:57.111 Flexible Data Placement Supported: Not Supported
00:25:57.111
00:25:57.111 Controller Memory Buffer Support
00:25:57.111 ================================
00:25:57.111 Supported: No
00:25:57.111
00:25:57.111 Persistent Memory Region Support
00:25:57.111 ================================
00:25:57.111 Supported: No
00:25:57.111
00:25:57.111 Admin Command Set Attributes
00:25:57.111 ============================
00:25:57.111 Security Send/Receive: Not Supported
00:25:57.111 Format NVM: Not Supported
00:25:57.111 Firmware Activate/Download: Not Supported
00:25:57.111 Namespace Management: Not Supported
00:25:57.111 Device Self-Test: Not Supported
00:25:57.111 Directives: Not Supported
00:25:57.111 NVMe-MI: Not Supported
00:25:57.111 Virtualization Management: Not Supported
00:25:57.111 Doorbell Buffer Config: Not Supported
00:25:57.111 Get LBA Status Capability: Not Supported
00:25:57.111 Command & Feature Lockdown Capability: Not Supported
00:25:57.111 Abort Command Limit: 1
00:25:57.111 Async Event Request Limit: 4
00:25:57.111 Number of Firmware Slots: N/A
00:25:57.111 Firmware Slot 1 Read-Only: N/A
00:25:57.111 Firmware Activation Without Reset: N/A
00:25:57.111 Multiple Update Detection Support: N/A
00:25:57.111 Firmware Update Granularity: No Information Provided
00:25:57.111 Per-Namespace SMART Log: No
00:25:57.111 Asymmetric Namespace Access Log Page: Not Supported
00:25:57.111 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:57.111 Command Effects Log Page: Not Supported
00:25:57.111 Get Log Page Extended Data: Supported
00:25:57.111 Telemetry Log Pages: Not Supported
00:25:57.111 Persistent Event Log Pages: Not Supported
00:25:57.111 Supported Log Pages Log Page: May Support
00:25:57.111 Commands Supported & Effects Log Page: Not Supported
00:25:57.111 Feature Identifiers & Effects Log Page:May Support
00:25:57.111 NVMe-MI Commands & Effects Log Page: May Support
00:25:57.111 Data Area 4 for Telemetry Log: Not Supported
00:25:57.111 Error Log Page Entries Supported: 128
00:25:57.111 Keep Alive: Not Supported
00:25:57.111
00:25:57.111 NVM Command Set Attributes
00:25:57.111 ==========================
00:25:57.111 Submission Queue Entry Size
00:25:57.111 Max: 1
00:25:57.111 Min: 1
00:25:57.111 Completion Queue Entry Size
00:25:57.111 Max: 1
00:25:57.111 Min: 1
00:25:57.111 Number of Namespaces: 0
00:25:57.111 Compare Command: Not Supported
00:25:57.111 Write Uncorrectable Command: Not Supported
00:25:57.111 Dataset Management Command: Not Supported
00:25:57.111 Write Zeroes Command: Not Supported
00:25:57.111 Set Features Save Field: Not Supported
00:25:57.111 Reservations: Not Supported
00:25:57.111 Timestamp: Not Supported
00:25:57.111 Copy: Not Supported
00:25:57.111 Volatile Write Cache: Not Present
00:25:57.111 Atomic Write Unit (Normal): 1
00:25:57.111 Atomic Write Unit (PFail): 1
00:25:57.111 Atomic Compare & Write Unit: 1
00:25:57.111 Fused Compare & Write: Supported
00:25:57.111 Scatter-Gather List
00:25:57.111 SGL Command Set: Supported
00:25:57.111 SGL Keyed: Supported
00:25:57.111 SGL Bit Bucket Descriptor: Not Supported
00:25:57.111 SGL Metadata Pointer: Not Supported
00:25:57.111 Oversized SGL: Not Supported
00:25:57.111 SGL Metadata Address: Not Supported
00:25:57.111 SGL Offset: Supported
00:25:57.111 Transport SGL Data Block: Not Supported
00:25:57.111 Replay Protected Memory Block: Not Supported
00:25:57.111
00:25:57.111 Firmware Slot Information
00:25:57.111 =========================
00:25:57.111 Active slot: 0
00:25:57.111
00:25:57.111
00:25:57.111 Error Log
00:25:57.111 =========
00:25:57.111
00:25:57.111 Active Namespaces
00:25:57.111 =================
00:25:57.111 Discovery Log Page
00:25:57.111 ==================
00:25:57.111 Generation Counter: 2
00:25:57.111 Number of Records: 2
00:25:57.111 Record Format: 0
00:25:57.111
00:25:57.111 Discovery Log Entry 0
00:25:57.111 ----------------------
00:25:57.111 Transport Type: 3 (TCP)
00:25:57.111 Address Family: 1 (IPv4)
00:25:57.111 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:57.111 Entry Flags:
00:25:57.111 Duplicate Returned Information: 1
00:25:57.111 Explicit Persistent Connection Support for Discovery: 1
00:25:57.111 Transport Requirements:
00:25:57.111 Secure Channel: Not Required
00:25:57.111 Port ID: 0 (0x0000)
00:25:57.111 Controller ID: 65535 (0xffff)
00:25:57.111 Admin Max SQ Size: 128
00:25:57.111 Transport Service Identifier: 4420
00:25:57.111 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:57.111 Transport Address: 10.0.0.2
00:25:57.111 Discovery Log Entry 1
00:25:57.111 ----------------------
00:25:57.111 Transport Type: 3 (TCP)
00:25:57.111 Address Family: 1 (IPv4)
00:25:57.111 Subsystem Type: 2 (NVM Subsystem)
00:25:57.111 Entry Flags:
00:25:57.111 Duplicate Returned Information: 0
00:25:57.111 Explicit Persistent Connection Support for Discovery: 0
00:25:57.111 Transport Requirements:
00:25:57.111 Secure Channel: Not Required
00:25:57.111 Port ID: 0 (0x0000)
00:25:57.111 Controller ID: 65535 (0xffff)
00:25:57.111 Admin Max SQ Size: 128
00:25:57.111 Transport Service Identifier: 4420
00:25:57.111 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:57.111 Transport Address: 10.0.0.2 [2024-12-05 13:57:28.619608] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:25:57.111 [2024-12-05
13:57:28.619631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a100) on tqpair=0xc08690 00:25:57.111 [2024-12-05 13:57:28.619646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.111 [2024-12-05 13:57:28.619655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a280) on tqpair=0xc08690 00:25:57.111 [2024-12-05 13:57:28.619663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.111 [2024-12-05 13:57:28.619671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a400) on tqpair=0xc08690 00:25:57.111 [2024-12-05 13:57:28.619679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.111 [2024-12-05 13:57:28.619687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.111 [2024-12-05 13:57:28.619695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.111 [2024-12-05 13:57:28.619708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.111 [2024-12-05 13:57:28.619716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.111 [2024-12-05 13:57:28.619723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.111 [2024-12-05 13:57:28.619734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.111 [2024-12-05 13:57:28.619774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.111 [2024-12-05 13:57:28.619930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.111 [2024-12-05 
13:57:28.619947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.111 [2024-12-05 13:57:28.619956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.111 [2024-12-05 13:57:28.619963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.111 [2024-12-05 13:57:28.619975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.111 [2024-12-05 13:57:28.619983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.111 [2024-12-05 13:57:28.619990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.111 [2024-12-05 13:57:28.620000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.111 [2024-12-05 13:57:28.620030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.111 [2024-12-05 13:57:28.620134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.111 [2024-12-05 13:57:28.620149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.620156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.620172] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:57.112 [2024-12-05 13:57:28.620180] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:57.112 [2024-12-05 13:57:28.620198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 
[2024-12-05 13:57:28.620221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.620232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.620260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.620391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.620406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.620413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.620452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.620480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.620502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.620636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.620651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.620658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 
00:25:57.112 [2024-12-05 13:57:28.620683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.620712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.620733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.620808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.620823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.620830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.620854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.620872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.620882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.620904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.620983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.620998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 
[2024-12-05 13:57:28.621005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.621029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.621057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.621079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.621163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.621178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.621185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.621210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.621238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.621260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 
00:25:57.112 [2024-12-05 13:57:28.621340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.621354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.621361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.621386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.621414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.621447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.621523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.621538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.621545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.621570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.621598] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.621620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.621698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.621713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.621720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.621745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.112 [2024-12-05 13:57:28.621773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-12-05 13:57:28.621796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.112 [2024-12-05 13:57:28.621901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.112 [2024-12-05 13:57:28.621919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.112 [2024-12-05 13:57:28.621927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.112 [2024-12-05 13:57:28.621953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621964] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.112 [2024-12-05 13:57:28.621970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.621981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.622005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.622105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.622120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.622127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.622153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.622181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.622204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.622276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.622291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.622301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622308] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.622326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.622356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.622378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.622508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.622524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.622531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.622557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.622585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.622608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.622709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 
13:57:28.622724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.622736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.622765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.622792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.622817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.622914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.622928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.622935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.622960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.622977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.622988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 
13:57:28.623010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.623080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.623095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.623102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.623109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.623125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.623138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.623145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.623156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.623177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.623256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.623270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.623277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.623284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.623302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.623313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.623319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.623330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.623351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.627445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.627462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.627469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.627476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.627499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.627510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.627517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc08690) 00:25:57.113 [2024-12-05 13:57:28.627527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-12-05 13:57:28.627550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6a580, cid 3, qid 0 00:25:57.113 [2024-12-05 13:57:28.627698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.113 [2024-12-05 13:57:28.627713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.113 [2024-12-05 13:57:28.627720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.113 [2024-12-05 13:57:28.627727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6a580) on tqpair=0xc08690 00:25:57.113 [2024-12-05 13:57:28.627741] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:25:57.375 00:25:57.375 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:57.375 [2024-12-05 13:57:28.665511] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:57.375 [2024-12-05 13:57:28.665561] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304708 ] 00:25:57.375 [2024-12-05 13:57:28.716038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:57.375 [2024-12-05 13:57:28.716092] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:57.375 [2024-12-05 13:57:28.716102] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:57.375 [2024-12-05 13:57:28.716122] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:57.375 [2024-12-05 13:57:28.716135] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:57.375 [2024-12-05 13:57:28.716560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:57.375 [2024-12-05 13:57:28.716601] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22af690 0 00:25:57.375 [2024-12-05 13:57:28.726653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:57.375 [2024-12-05 13:57:28.726672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:57.375 
[2024-12-05 13:57:28.726680] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:57.375 [2024-12-05 13:57:28.726686] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:57.375 [2024-12-05 13:57:28.726727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.726739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.726746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.375 [2024-12-05 13:57:28.726760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:57.375 [2024-12-05 13:57:28.726786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.375 [2024-12-05 13:57:28.734432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.375 [2024-12-05 13:57:28.734453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.375 [2024-12-05 13:57:28.734462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.375 [2024-12-05 13:57:28.734485] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:57.375 [2024-12-05 13:57:28.734497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:57.375 [2024-12-05 13:57:28.734506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:57.375 [2024-12-05 13:57:28.734525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.375 
[2024-12-05 13:57:28.734540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.375 [2024-12-05 13:57:28.734551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.375 [2024-12-05 13:57:28.734575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.375 [2024-12-05 13:57:28.734712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.375 [2024-12-05 13:57:28.734725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.375 [2024-12-05 13:57:28.734732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.375 [2024-12-05 13:57:28.734752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:57.375 [2024-12-05 13:57:28.734767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:57.375 [2024-12-05 13:57:28.734780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.375 [2024-12-05 13:57:28.734804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.375 [2024-12-05 13:57:28.734826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.375 [2024-12-05 13:57:28.734910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:57.375 [2024-12-05 13:57:28.734924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.375 [2024-12-05 13:57:28.734931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.375 [2024-12-05 13:57:28.734946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:57.375 [2024-12-05 13:57:28.734960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:57.375 [2024-12-05 13:57:28.734973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.734987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.375 [2024-12-05 13:57:28.734997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.375 [2024-12-05 13:57:28.735019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.375 [2024-12-05 13:57:28.735105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.375 [2024-12-05 13:57:28.735118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.375 [2024-12-05 13:57:28.735129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.735136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.375 [2024-12-05 13:57:28.735145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 
15000 ms) 00:25:57.375 [2024-12-05 13:57:28.735163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.735173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.735179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.375 [2024-12-05 13:57:28.735189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.375 [2024-12-05 13:57:28.735211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.375 [2024-12-05 13:57:28.735306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.375 [2024-12-05 13:57:28.735320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.375 [2024-12-05 13:57:28.735326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.735333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.375 [2024-12-05 13:57:28.735340] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:57.375 [2024-12-05 13:57:28.735349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:57.375 [2024-12-05 13:57:28.735362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:57.375 [2024-12-05 13:57:28.735472] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:57.375 [2024-12-05 13:57:28.735483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 
reg (timeout 15000 ms) 00:25:57.375 [2024-12-05 13:57:28.735495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.735503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.735509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.375 [2024-12-05 13:57:28.735519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.375 [2024-12-05 13:57:28.735541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.375 [2024-12-05 13:57:28.735737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.375 [2024-12-05 13:57:28.735751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.375 [2024-12-05 13:57:28.735758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.375 [2024-12-05 13:57:28.735765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.376 [2024-12-05 13:57:28.735773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:57.376 [2024-12-05 13:57:28.735790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.735799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.735805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.735816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.376 [2024-12-05 13:57:28.735837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.376 [2024-12-05 
13:57:28.735923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.376 [2024-12-05 13:57:28.735941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.376 [2024-12-05 13:57:28.735948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.735955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.376 [2024-12-05 13:57:28.735963] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:57.376 [2024-12-05 13:57:28.735971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.735985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:57.376 [2024-12-05 13:57:28.735999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.736014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.736032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.376 [2024-12-05 13:57:28.736054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.376 [2024-12-05 13:57:28.736190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.376 [2024-12-05 13:57:28.736203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.376 [2024-12-05 
13:57:28.736210] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736216] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=4096, cccid=0 00:25:57.376 [2024-12-05 13:57:28.736224] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311100) on tqpair(0x22af690): expected_datao=0, payload_size=4096 00:25:57.376 [2024-12-05 13:57:28.736231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736241] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736249] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.376 [2024-12-05 13:57:28.736270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.376 [2024-12-05 13:57:28.736276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.376 [2024-12-05 13:57:28.736293] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:57.376 [2024-12-05 13:57:28.736301] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:57.376 [2024-12-05 13:57:28.736309] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:57.376 [2024-12-05 13:57:28.736316] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:57.376 [2024-12-05 13:57:28.736323] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:57.376 [2024-12-05 13:57:28.736331] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.736345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.736357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.736388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:57.376 [2024-12-05 13:57:28.736412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.376 [2024-12-05 13:57:28.736551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.376 [2024-12-05 13:57:28.736565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.376 [2024-12-05 13:57:28.736572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.376 [2024-12-05 13:57:28.736589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.736612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.376 
[2024-12-05 13:57:28.736623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.736644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.376 [2024-12-05 13:57:28.736654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.736675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.376 [2024-12-05 13:57:28.736685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.736706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.376 [2024-12-05 13:57:28.736731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.736750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.736763] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.736770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.736794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.376 [2024-12-05 13:57:28.736817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311100, cid 0, qid 0 00:25:57.376 [2024-12-05 13:57:28.736828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311280, cid 1, qid 0 00:25:57.376 [2024-12-05 13:57:28.736835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311400, cid 2, qid 0 00:25:57.376 [2024-12-05 13:57:28.736858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.376 [2024-12-05 13:57:28.736866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311700, cid 4, qid 0 00:25:57.376 [2024-12-05 13:57:28.737041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.376 [2024-12-05 13:57:28.737059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.376 [2024-12-05 13:57:28.737067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.737074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311700) on tqpair=0x22af690 00:25:57.376 [2024-12-05 13:57:28.737082] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:57.376 [2024-12-05 13:57:28.737090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.737109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 
1] setting state to set number of queues (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.737122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.737133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.737141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.737147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22af690) 00:25:57.376 [2024-12-05 13:57:28.737172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:57.376 [2024-12-05 13:57:28.737194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311700, cid 4, qid 0 00:25:57.376 [2024-12-05 13:57:28.737373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.376 [2024-12-05 13:57:28.737387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.376 [2024-12-05 13:57:28.737394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.737401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311700) on tqpair=0x22af690 00:25:57.376 [2024-12-05 13:57:28.737479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.737502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:57.376 [2024-12-05 13:57:28.737517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.376 [2024-12-05 13:57:28.737524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22af690) 
00:25:57.376 [2024-12-05 13:57:28.737535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.376 [2024-12-05 13:57:28.737557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311700, cid 4, qid 0 00:25:57.376 [2024-12-05 13:57:28.737692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.377 [2024-12-05 13:57:28.737707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.377 [2024-12-05 13:57:28.737713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.737720] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=4096, cccid=4 00:25:57.377 [2024-12-05 13:57:28.737727] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311700) on tqpair(0x22af690): expected_datao=0, payload_size=4096 00:25:57.377 [2024-12-05 13:57:28.737734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.737744] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.737751] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.737763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.377 [2024-12-05 13:57:28.737772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.377 [2024-12-05 13:57:28.737779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.737785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311700) on tqpair=0x22af690 00:25:57.377 [2024-12-05 13:57:28.737807] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:57.377 [2024-12-05 13:57:28.737824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.737843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.737856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.737864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22af690) 00:25:57.377 [2024-12-05 13:57:28.737874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.377 [2024-12-05 13:57:28.737896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311700, cid 4, qid 0 00:25:57.377 [2024-12-05 13:57:28.738007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.377 [2024-12-05 13:57:28.738021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.377 [2024-12-05 13:57:28.738028] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738034] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=4096, cccid=4 00:25:57.377 [2024-12-05 13:57:28.738041] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311700) on tqpair(0x22af690): expected_datao=0, payload_size=4096 00:25:57.377 [2024-12-05 13:57:28.738048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738065] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738074] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.377 [2024-12-05 13:57:28.738095] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.377 [2024-12-05 13:57:28.738102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311700) on tqpair=0x22af690 00:25:57.377 [2024-12-05 13:57:28.738124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22af690) 00:25:57.377 [2024-12-05 13:57:28.738173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.377 [2024-12-05 13:57:28.738195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311700, cid 4, qid 0 00:25:57.377 [2024-12-05 13:57:28.738291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.377 [2024-12-05 13:57:28.738305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.377 [2024-12-05 13:57:28.738312] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738318] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=4096, cccid=4 00:25:57.377 [2024-12-05 13:57:28.738325] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311700) on tqpair(0x22af690): expected_datao=0, payload_size=4096 00:25:57.377 [2024-12-05 13:57:28.738332] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738349] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.377 [2024-12-05 13:57:28.738394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.377 [2024-12-05 13:57:28.738401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311700) on tqpair=0x22af690 00:25:57.377 [2024-12-05 13:57:28.738433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738505] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set 
Features - Host ID 00:25:57.377 [2024-12-05 13:57:28.738513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:57.377 [2024-12-05 13:57:28.738522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:57.377 [2024-12-05 13:57:28.738540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22af690) 00:25:57.377 [2024-12-05 13:57:28.738560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.377 [2024-12-05 13:57:28.738571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22af690) 00:25:57.377 [2024-12-05 13:57:28.738593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.377 [2024-12-05 13:57:28.738619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311700, cid 4, qid 0 00:25:57.377 [2024-12-05 13:57:28.738631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311880, cid 5, qid 0 00:25:57.377 [2024-12-05 13:57:28.738764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.377 [2024-12-05 13:57:28.738778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.377 [2024-12-05 13:57:28.738785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2311700) on tqpair=0x22af690 00:25:57.377 [2024-12-05 13:57:28.738801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.377 [2024-12-05 13:57:28.738810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.377 [2024-12-05 13:57:28.738817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311880) on tqpair=0x22af690 00:25:57.377 [2024-12-05 13:57:28.738839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.738847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22af690) 00:25:57.377 [2024-12-05 13:57:28.738858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.377 [2024-12-05 13:57:28.738883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311880, cid 5, qid 0 00:25:57.377 [2024-12-05 13:57:28.739011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.377 [2024-12-05 13:57:28.739025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.377 [2024-12-05 13:57:28.739031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.739038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311880) on tqpair=0x22af690 00:25:57.377 [2024-12-05 13:57:28.739053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.739062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22af690) 00:25:57.377 [2024-12-05 13:57:28.739072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.377 [2024-12-05 
13:57:28.739093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311880, cid 5, qid 0 00:25:57.377 [2024-12-05 13:57:28.739219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.377 [2024-12-05 13:57:28.739232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.377 [2024-12-05 13:57:28.739239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.739246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311880) on tqpair=0x22af690 00:25:57.377 [2024-12-05 13:57:28.739261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.377 [2024-12-05 13:57:28.739270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22af690) 00:25:57.377 [2024-12-05 13:57:28.739280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.377 [2024-12-05 13:57:28.739301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311880, cid 5, qid 0 00:25:57.377 [2024-12-05 13:57:28.739434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.378 [2024-12-05 13:57:28.739447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.378 [2024-12-05 13:57:28.739454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311880) on tqpair=0x22af690 00:25:57.378 [2024-12-05 13:57:28.739484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22af690) 00:25:57.378 [2024-12-05 13:57:28.739505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff 
cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.378 [2024-12-05 13:57:28.739518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22af690) 00:25:57.378 [2024-12-05 13:57:28.739534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.378 [2024-12-05 13:57:28.739546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22af690) 00:25:57.378 [2024-12-05 13:57:28.739562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.378 [2024-12-05 13:57:28.739574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22af690) 00:25:57.378 [2024-12-05 13:57:28.739590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.378 [2024-12-05 13:57:28.739616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311880, cid 5, qid 0 00:25:57.378 [2024-12-05 13:57:28.739628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311700, cid 4, qid 0 00:25:57.378 [2024-12-05 13:57:28.739635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311a00, cid 6, qid 0 00:25:57.378 [2024-12-05 13:57:28.739643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311b80, cid 7, qid 0 00:25:57.378 
[2024-12-05 13:57:28.739845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.378 [2024-12-05 13:57:28.739859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.378 [2024-12-05 13:57:28.739866] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739872] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=8192, cccid=5 00:25:57.378 [2024-12-05 13:57:28.739879] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311880) on tqpair(0x22af690): expected_datao=0, payload_size=8192 00:25:57.378 [2024-12-05 13:57:28.739887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739908] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.378 [2024-12-05 13:57:28.739935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.378 [2024-12-05 13:57:28.739941] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739947] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=512, cccid=4 00:25:57.378 [2024-12-05 13:57:28.739954] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311700) on tqpair(0x22af690): expected_datao=0, payload_size=512 00:25:57.378 [2024-12-05 13:57:28.739961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739970] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739977] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.739985] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.378 [2024-12-05 13:57:28.739993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.378 [2024-12-05 13:57:28.740000] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740006] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=512, cccid=6 00:25:57.378 [2024-12-05 13:57:28.740013] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311a00) on tqpair(0x22af690): expected_datao=0, payload_size=512 00:25:57.378 [2024-12-05 13:57:28.740020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740029] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740035] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:57.378 [2024-12-05 13:57:28.740052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:57.378 [2024-12-05 13:57:28.740058] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740064] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22af690): datao=0, datal=4096, cccid=7 00:25:57.378 [2024-12-05 13:57:28.740072] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311b80) on tqpair(0x22af690): expected_datao=0, payload_size=4096 00:25:57.378 [2024-12-05 13:57:28.740079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740088] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740095] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:57.378 [2024-12-05 13:57:28.740121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.378 [2024-12-05 13:57:28.740128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311880) on tqpair=0x22af690 00:25:57.378 [2024-12-05 13:57:28.740153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.378 [2024-12-05 13:57:28.740165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.378 [2024-12-05 13:57:28.740186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311700) on tqpair=0x22af690 00:25:57.378 [2024-12-05 13:57:28.740207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.378 [2024-12-05 13:57:28.740218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.378 [2024-12-05 13:57:28.740224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311a00) on tqpair=0x22af690 00:25:57.378 [2024-12-05 13:57:28.740255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.378 [2024-12-05 13:57:28.740264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.378 [2024-12-05 13:57:28.740270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.378 [2024-12-05 13:57:28.740276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311b80) on tqpair=0x22af690 00:25:57.378 ===================================================== 00:25:57.378 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.378 ===================================================== 00:25:57.378 Controller 
Capabilities/Features 00:25:57.378 ================================ 00:25:57.378 Vendor ID: 8086 00:25:57.378 Subsystem Vendor ID: 8086 00:25:57.378 Serial Number: SPDK00000000000001 00:25:57.378 Model Number: SPDK bdev Controller 00:25:57.378 Firmware Version: 25.01 00:25:57.378 Recommended Arb Burst: 6 00:25:57.378 IEEE OUI Identifier: e4 d2 5c 00:25:57.378 Multi-path I/O 00:25:57.378 May have multiple subsystem ports: Yes 00:25:57.378 May have multiple controllers: Yes 00:25:57.378 Associated with SR-IOV VF: No 00:25:57.378 Max Data Transfer Size: 131072 00:25:57.378 Max Number of Namespaces: 32 00:25:57.378 Max Number of I/O Queues: 127 00:25:57.378 NVMe Specification Version (VS): 1.3 00:25:57.378 NVMe Specification Version (Identify): 1.3 00:25:57.378 Maximum Queue Entries: 128 00:25:57.378 Contiguous Queues Required: Yes 00:25:57.378 Arbitration Mechanisms Supported 00:25:57.378 Weighted Round Robin: Not Supported 00:25:57.378 Vendor Specific: Not Supported 00:25:57.378 Reset Timeout: 15000 ms 00:25:57.378 Doorbell Stride: 4 bytes 00:25:57.378 NVM Subsystem Reset: Not Supported 00:25:57.378 Command Sets Supported 00:25:57.378 NVM Command Set: Supported 00:25:57.378 Boot Partition: Not Supported 00:25:57.378 Memory Page Size Minimum: 4096 bytes 00:25:57.378 Memory Page Size Maximum: 4096 bytes 00:25:57.378 Persistent Memory Region: Not Supported 00:25:57.378 Optional Asynchronous Events Supported 00:25:57.378 Namespace Attribute Notices: Supported 00:25:57.378 Firmware Activation Notices: Not Supported 00:25:57.378 ANA Change Notices: Not Supported 00:25:57.378 PLE Aggregate Log Change Notices: Not Supported 00:25:57.378 LBA Status Info Alert Notices: Not Supported 00:25:57.378 EGE Aggregate Log Change Notices: Not Supported 00:25:57.378 Normal NVM Subsystem Shutdown event: Not Supported 00:25:57.378 Zone Descriptor Change Notices: Not Supported 00:25:57.378 Discovery Log Change Notices: Not Supported 00:25:57.378 Controller Attributes 00:25:57.378 128-bit 
Host Identifier: Supported 00:25:57.378 Non-Operational Permissive Mode: Not Supported 00:25:57.378 NVM Sets: Not Supported 00:25:57.378 Read Recovery Levels: Not Supported 00:25:57.378 Endurance Groups: Not Supported 00:25:57.378 Predictable Latency Mode: Not Supported 00:25:57.378 Traffic Based Keep ALive: Not Supported 00:25:57.378 Namespace Granularity: Not Supported 00:25:57.378 SQ Associations: Not Supported 00:25:57.378 UUID List: Not Supported 00:25:57.378 Multi-Domain Subsystem: Not Supported 00:25:57.378 Fixed Capacity Management: Not Supported 00:25:57.378 Variable Capacity Management: Not Supported 00:25:57.378 Delete Endurance Group: Not Supported 00:25:57.378 Delete NVM Set: Not Supported 00:25:57.378 Extended LBA Formats Supported: Not Supported 00:25:57.379 Flexible Data Placement Supported: Not Supported 00:25:57.379 00:25:57.379 Controller Memory Buffer Support 00:25:57.379 ================================ 00:25:57.379 Supported: No 00:25:57.379 00:25:57.379 Persistent Memory Region Support 00:25:57.379 ================================ 00:25:57.379 Supported: No 00:25:57.379 00:25:57.379 Admin Command Set Attributes 00:25:57.379 ============================ 00:25:57.379 Security Send/Receive: Not Supported 00:25:57.379 Format NVM: Not Supported 00:25:57.379 Firmware Activate/Download: Not Supported 00:25:57.379 Namespace Management: Not Supported 00:25:57.379 Device Self-Test: Not Supported 00:25:57.379 Directives: Not Supported 00:25:57.379 NVMe-MI: Not Supported 00:25:57.379 Virtualization Management: Not Supported 00:25:57.379 Doorbell Buffer Config: Not Supported 00:25:57.379 Get LBA Status Capability: Not Supported 00:25:57.379 Command & Feature Lockdown Capability: Not Supported 00:25:57.379 Abort Command Limit: 4 00:25:57.379 Async Event Request Limit: 4 00:25:57.379 Number of Firmware Slots: N/A 00:25:57.379 Firmware Slot 1 Read-Only: N/A 00:25:57.379 Firmware Activation Without Reset: N/A 00:25:57.379 Multiple Update Detection Support: 
N/A 00:25:57.379 Firmware Update Granularity: No Information Provided 00:25:57.379 Per-Namespace SMART Log: No 00:25:57.379 Asymmetric Namespace Access Log Page: Not Supported 00:25:57.379 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:57.379 Command Effects Log Page: Supported 00:25:57.379 Get Log Page Extended Data: Supported 00:25:57.379 Telemetry Log Pages: Not Supported 00:25:57.379 Persistent Event Log Pages: Not Supported 00:25:57.379 Supported Log Pages Log Page: May Support 00:25:57.379 Commands Supported & Effects Log Page: Not Supported 00:25:57.379 Feature Identifiers & Effects Log Page:May Support 00:25:57.379 NVMe-MI Commands & Effects Log Page: May Support 00:25:57.379 Data Area 4 for Telemetry Log: Not Supported 00:25:57.379 Error Log Page Entries Supported: 128 00:25:57.379 Keep Alive: Supported 00:25:57.379 Keep Alive Granularity: 10000 ms 00:25:57.379 00:25:57.379 NVM Command Set Attributes 00:25:57.379 ========================== 00:25:57.379 Submission Queue Entry Size 00:25:57.379 Max: 64 00:25:57.379 Min: 64 00:25:57.379 Completion Queue Entry Size 00:25:57.379 Max: 16 00:25:57.379 Min: 16 00:25:57.379 Number of Namespaces: 32 00:25:57.379 Compare Command: Supported 00:25:57.379 Write Uncorrectable Command: Not Supported 00:25:57.379 Dataset Management Command: Supported 00:25:57.379 Write Zeroes Command: Supported 00:25:57.379 Set Features Save Field: Not Supported 00:25:57.379 Reservations: Supported 00:25:57.379 Timestamp: Not Supported 00:25:57.379 Copy: Supported 00:25:57.379 Volatile Write Cache: Present 00:25:57.379 Atomic Write Unit (Normal): 1 00:25:57.379 Atomic Write Unit (PFail): 1 00:25:57.379 Atomic Compare & Write Unit: 1 00:25:57.379 Fused Compare & Write: Supported 00:25:57.379 Scatter-Gather List 00:25:57.379 SGL Command Set: Supported 00:25:57.379 SGL Keyed: Supported 00:25:57.379 SGL Bit Bucket Descriptor: Not Supported 00:25:57.379 SGL Metadata Pointer: Not Supported 00:25:57.379 Oversized SGL: Not Supported 00:25:57.379 
SGL Metadata Address: Not Supported 00:25:57.379 SGL Offset: Supported 00:25:57.379 Transport SGL Data Block: Not Supported 00:25:57.379 Replay Protected Memory Block: Not Supported 00:25:57.379 00:25:57.379 Firmware Slot Information 00:25:57.379 ========================= 00:25:57.379 Active slot: 1 00:25:57.379 Slot 1 Firmware Revision: 25.01 00:25:57.379 00:25:57.379 00:25:57.379 Commands Supported and Effects 00:25:57.379 ============================== 00:25:57.379 Admin Commands 00:25:57.379 -------------- 00:25:57.379 Get Log Page (02h): Supported 00:25:57.379 Identify (06h): Supported 00:25:57.379 Abort (08h): Supported 00:25:57.379 Set Features (09h): Supported 00:25:57.379 Get Features (0Ah): Supported 00:25:57.379 Asynchronous Event Request (0Ch): Supported 00:25:57.379 Keep Alive (18h): Supported 00:25:57.379 I/O Commands 00:25:57.379 ------------ 00:25:57.379 Flush (00h): Supported LBA-Change 00:25:57.379 Write (01h): Supported LBA-Change 00:25:57.379 Read (02h): Supported 00:25:57.379 Compare (05h): Supported 00:25:57.379 Write Zeroes (08h): Supported LBA-Change 00:25:57.379 Dataset Management (09h): Supported LBA-Change 00:25:57.379 Copy (19h): Supported LBA-Change 00:25:57.379 00:25:57.379 Error Log 00:25:57.379 ========= 00:25:57.379 00:25:57.379 Arbitration 00:25:57.379 =========== 00:25:57.379 Arbitration Burst: 1 00:25:57.379 00:25:57.379 Power Management 00:25:57.379 ================ 00:25:57.379 Number of Power States: 1 00:25:57.379 Current Power State: Power State #0 00:25:57.379 Power State #0: 00:25:57.379 Max Power: 0.00 W 00:25:57.379 Non-Operational State: Operational 00:25:57.379 Entry Latency: Not Reported 00:25:57.379 Exit Latency: Not Reported 00:25:57.379 Relative Read Throughput: 0 00:25:57.379 Relative Read Latency: 0 00:25:57.379 Relative Write Throughput: 0 00:25:57.379 Relative Write Latency: 0 00:25:57.379 Idle Power: Not Reported 00:25:57.379 Active Power: Not Reported 00:25:57.379 Non-Operational Permissive Mode: Not 
Supported 00:25:57.379 00:25:57.379 Health Information 00:25:57.379 ================== 00:25:57.379 Critical Warnings: 00:25:57.379 Available Spare Space: OK 00:25:57.379 Temperature: OK 00:25:57.379 Device Reliability: OK 00:25:57.379 Read Only: No 00:25:57.379 Volatile Memory Backup: OK 00:25:57.379 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:57.379 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:57.379 Available Spare: 0% 00:25:57.379 Available Spare Threshold: 0% 00:25:57.379 Life Percentage Used:[2024-12-05 13:57:28.740380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.379 [2024-12-05 13:57:28.740391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22af690) 00:25:57.379 [2024-12-05 13:57:28.740423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.379 [2024-12-05 13:57:28.740450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311b80, cid 7, qid 0 00:25:57.379 [2024-12-05 13:57:28.740579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.379 [2024-12-05 13:57:28.740593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.379 [2024-12-05 13:57:28.740600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.379 [2024-12-05 13:57:28.740607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311b80) on tqpair=0x22af690 00:25:57.379 [2024-12-05 13:57:28.740652] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:57.379 [2024-12-05 13:57:28.740671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311100) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.740682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:57.380 [2024-12-05 13:57:28.740691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311280) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.740698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.380 [2024-12-05 13:57:28.740706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311400) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.740714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.380 [2024-12-05 13:57:28.740722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.740729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.380 [2024-12-05 13:57:28.740741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.740749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.740755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.740784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.740807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.740938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.740953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.740959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.740966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.740977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.740985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.740991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.741001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.741027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.741135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.741149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.741155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.741170] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:57.380 [2024-12-05 13:57:28.741177] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:57.380 [2024-12-05 13:57:28.741193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.741218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.380 [2024-12-05 13:57:28.741238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.741338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.741352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.741358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.741381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.741407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.741435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.741564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.741577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.741583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.741605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741627] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.741638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.741659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.741738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.741752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.741758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.741781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.741806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.741827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.741937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.741949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.741956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 
13:57:28.741977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.741993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.742003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.742023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.742150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.742162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.742169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.742175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.742190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.742199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.742206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.742216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.742236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.742363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.742374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 
13:57:28.742381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.742388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.742403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.742412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.746430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22af690) 00:25:57.380 [2024-12-05 13:57:28.746448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.380 [2024-12-05 13:57:28.746472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311580, cid 3, qid 0 00:25:57.380 [2024-12-05 13:57:28.746591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:57.380 [2024-12-05 13:57:28.746606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:57.380 [2024-12-05 13:57:28.746613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:57.380 [2024-12-05 13:57:28.746619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311580) on tqpair=0x22af690 00:25:57.380 [2024-12-05 13:57:28.746632] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:25:57.380 0% 00:25:57.380 Data Units Read: 0 00:25:57.380 Data Units Written: 0 00:25:57.380 Host Read Commands: 0 00:25:57.380 Host Write Commands: 0 00:25:57.380 Controller Busy Time: 0 minutes 00:25:57.380 Power Cycles: 0 00:25:57.380 Power On Hours: 0 hours 00:25:57.380 Unsafe Shutdowns: 0 00:25:57.380 Unrecoverable Media Errors: 0 00:25:57.380 Lifetime Error Log Entries: 0 00:25:57.380 Warning Temperature Time: 0 minutes 00:25:57.380 Critical Temperature Time: 0 minutes 00:25:57.380 
00:25:57.380 Number of Queues 00:25:57.380 ================ 00:25:57.380 Number of I/O Submission Queues: 127 00:25:57.381 Number of I/O Completion Queues: 127 00:25:57.381 00:25:57.381 Active Namespaces 00:25:57.381 ================= 00:25:57.381 Namespace ID:1 00:25:57.381 Error Recovery Timeout: Unlimited 00:25:57.381 Command Set Identifier: NVM (00h) 00:25:57.381 Deallocate: Supported 00:25:57.381 Deallocated/Unwritten Error: Not Supported 00:25:57.381 Deallocated Read Value: Unknown 00:25:57.381 Deallocate in Write Zeroes: Not Supported 00:25:57.381 Deallocated Guard Field: 0xFFFF 00:25:57.381 Flush: Supported 00:25:57.381 Reservation: Supported 00:25:57.381 Namespace Sharing Capabilities: Multiple Controllers 00:25:57.381 Size (in LBAs): 131072 (0GiB) 00:25:57.381 Capacity (in LBAs): 131072 (0GiB) 00:25:57.381 Utilization (in LBAs): 131072 (0GiB) 00:25:57.381 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:57.381 EUI64: ABCDEF0123456789 00:25:57.381 UUID: d15444f2-ee97-4dc4-a932-d26c26519dd3 00:25:57.381 Thin Provisioning: Not Supported 00:25:57.381 Per-NS Atomic Units: Yes 00:25:57.381 Atomic Boundary Size (Normal): 0 00:25:57.381 Atomic Boundary Size (PFail): 0 00:25:57.381 Atomic Boundary Offset: 0 00:25:57.381 Maximum Single Source Range Length: 65535 00:25:57.381 Maximum Copy Length: 65535 00:25:57.381 Maximum Source Range Count: 1 00:25:57.381 NGUID/EUI64 Never Reused: No 00:25:57.381 Namespace Write Protected: No 00:25:57.381 Number of LBA Formats: 1 00:25:57.381 Current LBA Format: LBA Format #00 00:25:57.381 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:57.381 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@10 -- # set +x 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.381 rmmod nvme_tcp 00:25:57.381 rmmod nvme_fabrics 00:25:57.381 rmmod nvme_keyring 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2304677 ']' 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2304677 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2304677 ']' 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2304677 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.381 13:57:28 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304677 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304677' 00:25:57.381 killing process with pid 2304677 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2304677 00:25:57.381 13:57:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2304677 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.641 13:57:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:00.178 00:26:00.178 real 0m5.612s 00:26:00.178 user 0m4.561s 00:26:00.178 sys 0m2.023s 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.178 ************************************ 00:26:00.178 END TEST nvmf_identify 00:26:00.178 ************************************ 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.178 ************************************ 00:26:00.178 START TEST nvmf_perf 00:26:00.178 ************************************ 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:00.178 * Looking for test storage... 
00:26:00.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:00.178 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:00.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.179 --rc genhtml_branch_coverage=1 00:26:00.179 --rc genhtml_function_coverage=1 00:26:00.179 --rc genhtml_legend=1 00:26:00.179 --rc geninfo_all_blocks=1 00:26:00.179 --rc geninfo_unexecuted_blocks=1 00:26:00.179 00:26:00.179 ' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:00.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:26:00.179 --rc genhtml_branch_coverage=1 00:26:00.179 --rc genhtml_function_coverage=1 00:26:00.179 --rc genhtml_legend=1 00:26:00.179 --rc geninfo_all_blocks=1 00:26:00.179 --rc geninfo_unexecuted_blocks=1 00:26:00.179 00:26:00.179 ' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:00.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.179 --rc genhtml_branch_coverage=1 00:26:00.179 --rc genhtml_function_coverage=1 00:26:00.179 --rc genhtml_legend=1 00:26:00.179 --rc geninfo_all_blocks=1 00:26:00.179 --rc geninfo_unexecuted_blocks=1 00:26:00.179 00:26:00.179 ' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:00.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.179 --rc genhtml_branch_coverage=1 00:26:00.179 --rc genhtml_function_coverage=1 00:26:00.179 --rc genhtml_legend=1 00:26:00.179 --rc geninfo_all_blocks=1 00:26:00.179 --rc geninfo_unexecuted_blocks=1 00:26:00.179 00:26:00.179 ' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:00.179 13:57:31 
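The trace above records a real bash error from `nvmf/common.sh` line 33: `'[' '' -eq 1 ']'` produces `[: : integer expression expected`, because `-eq` requires both operands to be integers and the variable being tested expanded to an empty string. A minimal reproduction of the failure mode, plus one common defensive pattern (shown here as a hypothetical rewrite, not SPDK's actual fix):

```shell
# Reproduce the "[: : integer expression expected" failure seen in the
# log: `-eq` on an empty string makes `[` return status 2, which an
# `if` treats the same as false (the script continues, as it did here).
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "not enabled"
fi

# Defensive variant (hypothetical): default the variable to 0 so `-eq`
# always sees an integer and no error is emitted.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

Since the test harness runs with `set +e` semantics for these checks, the stray error is cosmetic, but the `${var:-0}` form keeps the trace clean.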
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.179 13:57:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.079 13:57:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.079 
13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:02.079 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:02.079 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:02.080 Found 0000:09:00.1 (0x8086 - 
0x159b) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:02.080 Found net devices under 0000:09:00.0: cvl_0_0 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.080 13:57:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:02.080 Found net devices under 0000:09:00.1: cvl_0_1 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.080 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:26:02.340 00:26:02.340 --- 10.0.0.2 ping statistics --- 00:26:02.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.340 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:26:02.340 00:26:02.340 --- 10.0.0.1 ping statistics --- 00:26:02.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.340 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
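The `ping -c 1` connectivity checks above end with a fixed-format `rtt min/avg/max/mdev = ...` summary line. When scripting checks like these, the average rtt can be extracted by splitting on `/`; this sketch parses a summary line copied verbatim from the log above rather than re-measuring anything:

```shell
# ping's summary line is slash-separated: splitting on '/' puts the
# average rtt in field 5 ("rtt min", "avg", "max", "mdev = <min>",
# "<avg>", "<max>", "<mdev> ms"). Value copied from the log above.
summary='rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms'
avg_rtt=$(printf '%s\n' "$summary" | awk -F'/' '{print $5}')
echo "$avg_rtt"   # prints 0.208
```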
start_nvmf_tgt 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2306766 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2306766 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2306766 ']' 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.340 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:02.340 [2024-12-05 13:57:33.726945] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:26:02.340 [2024-12-05 13:57:33.727023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.340 [2024-12-05 13:57:33.797800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.340 [2024-12-05 13:57:33.853754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.340 [2024-12-05 13:57:33.853805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.340 [2024-12-05 13:57:33.853834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.340 [2024-12-05 13:57:33.853846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.340 [2024-12-05 13:57:33.853863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:02.340 [2024-12-05 13:57:33.855759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.340 [2024-12-05 13:57:33.855883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.340 [2024-12-05 13:57:33.855933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.340 [2024-12-05 13:57:33.855936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.598 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.598 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:02.598 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.598 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.598 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:02.598 13:57:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.598 13:57:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:02.598 13:57:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:05.875 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:05.875 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:05.875 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:26:05.875 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:06.442 13:57:37 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:06.442 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:26:06.442 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:06.442 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:06.442 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:06.442 [2024-12-05 13:57:37.937752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.442 13:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:07.007 13:57:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:07.007 13:57:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:07.007 13:57:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:07.007 13:57:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:07.265 13:57:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.523 [2024-12-05 13:57:39.037697] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.780 13:57:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:26:08.037 13:57:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:26:08.037 13:57:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:26:08.037 13:57:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:08.037 13:57:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:26:09.411 Initializing NVMe Controllers 00:26:09.411 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:26:09.411 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:26:09.411 Initialization complete. Launching workers. 00:26:09.411 ======================================================== 00:26:09.411 Latency(us) 00:26:09.411 Device Information : IOPS MiB/s Average min max 00:26:09.411 PCIE (0000:0b:00.0) NSID 1 from core 0: 83989.81 328.09 380.50 37.99 6310.76 00:26:09.411 ======================================================== 00:26:09.411 Total : 83989.81 328.09 380.50 37.99 6310.76 00:26:09.411 00:26:09.411 13:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:10.379 Initializing NVMe Controllers 00:26:10.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:10.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:10.379 Initialization complete. Launching workers. 
00:26:10.379 ======================================================== 00:26:10.379 Latency(us) 00:26:10.379 Device Information : IOPS MiB/s Average min max 00:26:10.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.85 0.39 10084.18 147.54 45826.73 00:26:10.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.89 0.27 14861.25 4985.83 47924.68 00:26:10.379 ======================================================== 00:26:10.379 Total : 169.74 0.66 12023.11 147.54 47924.68 00:26:10.379 00:26:10.379 13:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:11.749 Initializing NVMe Controllers 00:26:11.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:11.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:11.749 Initialization complete. Launching workers. 
00:26:11.749 ======================================================== 00:26:11.750 Latency(us) 00:26:11.750 Device Information : IOPS MiB/s Average min max 00:26:11.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8494.25 33.18 3767.32 779.46 11505.49 00:26:11.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3838.95 15.00 8364.70 6813.77 22093.83 00:26:11.750 ======================================================== 00:26:11.750 Total : 12333.20 48.18 5198.34 779.46 22093.83 00:26:11.750 00:26:11.750 13:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:11.750 13:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:11.750 13:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:14.280 Initializing NVMe Controllers 00:26:14.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.280 Controller IO queue size 128, less than required. 00:26:14.280 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.280 Controller IO queue size 128, less than required. 00:26:14.280 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:14.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:14.280 Initialization complete. Launching workers. 
00:26:14.280 ======================================================== 00:26:14.280 Latency(us) 00:26:14.280 Device Information : IOPS MiB/s Average min max 00:26:14.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1641.93 410.48 79093.12 47238.68 147465.64 00:26:14.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.47 150.62 222955.05 110841.42 330011.94 00:26:14.280 ======================================================== 00:26:14.280 Total : 2244.40 561.10 117710.55 47238.68 330011.94 00:26:14.280 00:26:14.280 13:57:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:14.537 No valid NVMe controllers or AIO or URING devices found 00:26:14.537 Initializing NVMe Controllers 00:26:14.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.537 Controller IO queue size 128, less than required. 00:26:14.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.537 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:14.537 Controller IO queue size 128, less than required. 00:26:14.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.537 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:14.537 WARNING: Some requested NVMe devices were skipped 00:26:14.537 13:57:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:17.067 Initializing NVMe Controllers 00:26:17.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.067 Controller IO queue size 128, less than required. 00:26:17.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.067 Controller IO queue size 128, less than required. 00:26:17.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:17.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:17.068 Initialization complete. Launching workers. 
00:26:17.068 00:26:17.068 ==================== 00:26:17.068 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:17.068 TCP transport: 00:26:17.068 polls: 10805 00:26:17.068 idle_polls: 7583 00:26:17.068 sock_completions: 3222 00:26:17.068 nvme_completions: 5527 00:26:17.068 submitted_requests: 8284 00:26:17.068 queued_requests: 1 00:26:17.068 00:26:17.068 ==================== 00:26:17.068 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:17.068 TCP transport: 00:26:17.068 polls: 10963 00:26:17.068 idle_polls: 7180 00:26:17.068 sock_completions: 3783 00:26:17.068 nvme_completions: 6423 00:26:17.068 submitted_requests: 9588 00:26:17.068 queued_requests: 1 00:26:17.068 ======================================================== 00:26:17.068 Latency(us) 00:26:17.068 Device Information : IOPS MiB/s Average min max 00:26:17.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1378.05 344.51 94640.39 63050.87 144944.30 00:26:17.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1601.49 400.37 80860.28 40685.34 133451.36 00:26:17.068 ======================================================== 00:26:17.068 Total : 2979.54 744.88 87233.64 40685.34 144944.30 00:26:17.068 00:26:17.068 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:17.068 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf 
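The `--transport-stat` output above reports, per lcore and namespace, total `polls` against `idle_polls` (polls that completed no socket work). The difference gives busy polls, and the busy fraction is a rough indicator of how loaded the polling thread was. A sketch deriving both from the NSID 1 numbers copied from the statistics block above:

```shell
# Derive busy-poll figures from the NSID 1 transport stats in the log
# (polls=10805, idle_polls=7583): a "busy" poll is one that completed
# at least some socket work.
polls=10805
idle_polls=7583
busy=$((polls - idle_polls))
echo "$busy"   # prints 3222
# Busy fraction as a percentage (awk for floating-point division).
awk -v b="$busy" -v p="$polls" 'BEGIN { printf "%.1f%%\n", 100 * b / p }'
```

Note the computed busy-poll count (3222) matches the reported `sock_completions` for this namespace, which is consistent with each busy poll draining one batch of socket completions.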
-- nvmf/common.sh@121 -- # sync 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.325 rmmod nvme_tcp 00:26:17.325 rmmod nvme_fabrics 00:26:17.325 rmmod nvme_keyring 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:17.325 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2306766 ']' 00:26:17.326 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2306766 00:26:17.326 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2306766 ']' 00:26:17.326 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2306766 00:26:17.326 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:17.326 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.584 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306766 00:26:17.584 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.584 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.584 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306766' 00:26:17.584 killing process with pid 2306766 00:26:17.584 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2306766 00:26:17.584 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2306766 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.958 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.495 13:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.495 00:26:21.495 real 0m21.271s 00:26:21.495 user 1m5.040s 00:26:21.495 sys 0m5.640s 00:26:21.495 13:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.495 13:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:21.495 ************************************ 00:26:21.495 END TEST nvmf_perf 00:26:21.495 ************************************ 00:26:21.495 13:57:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.496 ************************************ 00:26:21.496 START TEST nvmf_fio_host 00:26:21.496 ************************************ 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:21.496 * Looking for test storage... 00:26:21.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.496 13:57:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.496 13:57:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.496 --rc genhtml_branch_coverage=1 00:26:21.496 --rc genhtml_function_coverage=1 00:26:21.496 --rc genhtml_legend=1 00:26:21.496 --rc geninfo_all_blocks=1 00:26:21.496 --rc geninfo_unexecuted_blocks=1 00:26:21.496 00:26:21.496 ' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.496 --rc genhtml_branch_coverage=1 00:26:21.496 --rc genhtml_function_coverage=1 00:26:21.496 --rc genhtml_legend=1 00:26:21.496 --rc geninfo_all_blocks=1 00:26:21.496 --rc geninfo_unexecuted_blocks=1 00:26:21.496 00:26:21.496 ' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.496 --rc genhtml_branch_coverage=1 00:26:21.496 --rc genhtml_function_coverage=1 00:26:21.496 --rc genhtml_legend=1 00:26:21.496 --rc geninfo_all_blocks=1 00:26:21.496 --rc geninfo_unexecuted_blocks=1 00:26:21.496 00:26:21.496 ' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.496 --rc genhtml_branch_coverage=1 00:26:21.496 --rc genhtml_function_coverage=1 00:26:21.496 --rc genhtml_legend=1 00:26:21.496 --rc geninfo_all_blocks=1 00:26:21.496 --rc geninfo_unexecuted_blocks=1 00:26:21.496 00:26:21.496 ' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.496 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:21.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.497 13:57:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.497 13:57:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.485 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x159b)' 00:26:23.486 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:23.486 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.486 13:57:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:23.486 Found net devices under 0000:09:00.0: cvl_0_0 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:23.486 Found net devices under 0000:09:00.1: cvl_0_1 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.486 13:57:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:23.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:26:23.486 00:26:23.486 --- 10.0.0.2 ping statistics --- 00:26:23.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.486 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:23.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:26:23.486 00:26:23.486 --- 10.0.0.1 ping statistics --- 00:26:23.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.486 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2310654 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2310654 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2310654 ']' 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.486 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.487 13:57:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.747 [2024-12-05 13:57:55.007004] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:26:23.747 [2024-12-05 13:57:55.007097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.747 [2024-12-05 13:57:55.091524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.747 [2024-12-05 13:57:55.149628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.747 [2024-12-05 13:57:55.149701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:23.747 [2024-12-05 13:57:55.149715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.747 [2024-12-05 13:57:55.149727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.747 [2024-12-05 13:57:55.149736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.747 [2024-12-05 13:57:55.151432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.747 [2024-12-05 13:57:55.151456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.747 [2024-12-05 13:57:55.151483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.747 [2024-12-05 13:57:55.151487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.004 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.004 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:24.004 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:24.004 [2024-12-05 13:57:55.527843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.262 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:24.262 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:24.262 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:24.520 Malloc1 00:26:24.520 13:57:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.778 13:57:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:25.035 13:57:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.292 [2024-12-05 13:57:56.752981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.292 13:57:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:25.550 13:57:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:25.550 13:57:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:25.807 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:25.807 fio-3.35 00:26:25.807 Starting 1 thread 00:26:28.331 00:26:28.331 test: (groupid=0, jobs=1): err= 0: pid=2311097: Thu Dec 5 13:57:59 2024 00:26:28.331 read: IOPS=8487, BW=33.2MiB/s (34.8MB/s)(66.5MiB/2007msec) 00:26:28.331 slat (nsec): min=1887, max=139679, avg=2470.50, stdev=1689.51 00:26:28.331 clat (usec): min=2714, max=13419, avg=8231.90, stdev=723.15 00:26:28.331 lat (usec): min=2737, max=13421, avg=8234.37, stdev=723.05 00:26:28.331 clat percentiles (usec): 00:26:28.331 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:26:28.331 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:26:28.331 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9372], 00:26:28.331 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[12125], 99.95th=[12518], 00:26:28.331 | 99.99th=[13304] 00:26:28.331 bw ( KiB/s): min=32624, max=35080, per=99.97%, avg=33940.00, stdev=1102.66, samples=4 00:26:28.331 iops : min= 8156, max= 8770, avg=8485.00, stdev=275.66, samples=4 00:26:28.331 write: IOPS=8487, BW=33.2MiB/s (34.8MB/s)(66.5MiB/2007msec); 0 zone resets 00:26:28.331 slat (usec): min=2, max=102, avg= 2.53, stdev= 1.23 00:26:28.331 clat (usec): min=1019, max=13051, avg=6810.02, stdev=600.44 00:26:28.331 lat (usec): min=1026, max=13053, avg=6812.56, stdev=600.40 00:26:28.331 clat percentiles (usec): 00:26:28.331 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:26:28.331 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 6980], 00:26:28.331 | 
70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7701], 00:26:28.331 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[11731], 99.95th=[12125], 00:26:28.331 | 99.99th=[12911] 00:26:28.331 bw ( KiB/s): min=33000, max=34648, per=99.98%, avg=33942.00, stdev=830.26, samples=4 00:26:28.331 iops : min= 8250, max= 8662, avg=8485.50, stdev=207.57, samples=4 00:26:28.331 lat (msec) : 2=0.03%, 4=0.10%, 10=99.47%, 20=0.40% 00:26:28.331 cpu : usr=62.61%, sys=35.74%, ctx=82, majf=0, minf=31 00:26:28.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:28.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:28.331 issued rwts: total=17034,17034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:28.331 00:26:28.331 Run status group 0 (all jobs): 00:26:28.331 READ: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=66.5MiB (69.8MB), run=2007-2007msec 00:26:28.331 WRITE: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=66.5MiB (69.8MB), run=2007-2007msec 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.331 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:28.332 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:28.332 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:28.332 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:28.332 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:26:28.332 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:28.332 13:57:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:28.589 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:28.589 fio-3.35 00:26:28.589 Starting 1 thread 00:26:30.483 [2024-12-05 13:58:01.563053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f7e50 is same with the state(6) to be set 00:26:30.483 [2024-12-05 13:58:01.563115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f7e50 is same with the state(6) to be set 00:26:31.048 00:26:31.048 test: (groupid=0, jobs=1): err= 0: pid=2311433: Thu Dec 5 13:58:02 2024 00:26:31.048 read: IOPS=8394, BW=131MiB/s (138MB/s)(264MiB/2009msec) 00:26:31.048 slat (nsec): min=2772, max=93640, avg=3576.04, stdev=1627.88 00:26:31.048 clat (usec): min=2228, max=16658, avg=8688.36, stdev=1964.83 00:26:31.048 lat (usec): min=2232, max=16661, avg=8691.94, stdev=1964.87 00:26:31.048 clat percentiles (usec): 00:26:31.048 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7046], 00:26:31.048 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:26:31.048 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11207], 95.00th=[11994], 00:26:31.048 | 99.00th=[13698], 99.50th=[14353], 99.90th=[16057], 99.95th=[16319], 00:26:31.048 | 99.99th=[16581] 00:26:31.048 bw ( KiB/s): min=60160, max=79040, per=52.37%, avg=70344.00, stdev=7810.73, samples=4 00:26:31.048 iops : min= 3760, max= 4940, avg=4396.50, stdev=488.17, samples=4 00:26:31.048 write: IOPS=4959, BW=77.5MiB/s (81.3MB/s)(143MiB/1849msec); 0 
zone resets 00:26:31.048 slat (usec): min=30, max=185, avg=33.46, stdev= 5.66 00:26:31.048 clat (usec): min=6463, max=19809, avg=11422.79, stdev=1843.02 00:26:31.048 lat (usec): min=6503, max=19840, avg=11456.25, stdev=1843.16 00:26:31.048 clat percentiles (usec): 00:26:31.048 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:26:31.048 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:26:31.048 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13829], 95.00th=[14746], 00:26:31.048 | 99.00th=[16188], 99.50th=[17171], 99.90th=[18744], 99.95th=[19006], 00:26:31.048 | 99.99th=[19792] 00:26:31.048 bw ( KiB/s): min=64256, max=80640, per=91.71%, avg=72784.00, stdev=6816.35, samples=4 00:26:31.048 iops : min= 4016, max= 5040, avg=4549.00, stdev=426.02, samples=4 00:26:31.048 lat (msec) : 4=0.23%, 10=55.60%, 20=44.17% 00:26:31.048 cpu : usr=79.99%, sys=18.87%, ctx=52, majf=0, minf=55 00:26:31.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:31.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:31.048 issued rwts: total=16865,9171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:31.048 00:26:31.048 Run status group 0 (all jobs): 00:26:31.048 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=264MiB (276MB), run=2009-2009msec 00:26:31.048 WRITE: bw=77.5MiB/s (81.3MB/s), 77.5MiB/s-77.5MiB/s (81.3MB/s-81.3MB/s), io=143MiB (150MB), run=1849-1849msec 00:26:31.048 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.048 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:31.048 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap 
- SIGINT SIGTERM EXIT 00:26:31.048 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:31.048 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:31.048 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.048 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.305 rmmod nvme_tcp 00:26:31.305 rmmod nvme_fabrics 00:26:31.305 rmmod nvme_keyring 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2310654 ']' 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2310654 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2310654 ']' 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2310654 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310654 00:26:31.305 13:58:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310654' 00:26:31.305 killing process with pid 2310654 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2310654 00:26:31.305 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2310654 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.563 13:58:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.469 13:58:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.469 
00:26:33.469 real 0m12.409s 00:26:33.469 user 0m36.997s 00:26:33.469 sys 0m4.012s 00:26:33.469 13:58:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.469 13:58:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.469 ************************************ 00:26:33.469 END TEST nvmf_fio_host 00:26:33.469 ************************************ 00:26:33.469 13:58:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:33.470 13:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.470 13:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.470 13:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.734 ************************************ 00:26:33.734 START TEST nvmf_failover 00:26:33.734 ************************************ 00:26:33.734 13:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:33.734 * Looking for test storage... 
00:26:33.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:33.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.734 --rc genhtml_branch_coverage=1 00:26:33.734 --rc genhtml_function_coverage=1 00:26:33.734 --rc genhtml_legend=1 00:26:33.734 --rc geninfo_all_blocks=1 00:26:33.734 --rc geninfo_unexecuted_blocks=1 00:26:33.734 00:26:33.734 ' 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:26:33.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.734 --rc genhtml_branch_coverage=1 00:26:33.734 --rc genhtml_function_coverage=1 00:26:33.734 --rc genhtml_legend=1 00:26:33.734 --rc geninfo_all_blocks=1 00:26:33.734 --rc geninfo_unexecuted_blocks=1 00:26:33.734 00:26:33.734 ' 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:33.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.734 --rc genhtml_branch_coverage=1 00:26:33.734 --rc genhtml_function_coverage=1 00:26:33.734 --rc genhtml_legend=1 00:26:33.734 --rc geninfo_all_blocks=1 00:26:33.734 --rc geninfo_unexecuted_blocks=1 00:26:33.734 00:26:33.734 ' 00:26:33.734 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:33.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.734 --rc genhtml_branch_coverage=1 00:26:33.735 --rc genhtml_function_coverage=1 00:26:33.735 --rc genhtml_legend=1 00:26:33.735 --rc geninfo_all_blocks=1 00:26:33.735 --rc geninfo_unexecuted_blocks=1 00:26:33.735 00:26:33.735 ' 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.735 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.736 13:58:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.269 13:58:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:36.269 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.269 13:58:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:36.269 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.269 13:58:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:36.269 Found net devices under 0000:09:00.0: cvl_0_0 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.269 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:36.270 Found net devices under 0000:09:00.1: cvl_0_1 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.270 13:58:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:36.270 00:26:36.270 --- 10.0.0.2 ping statistics --- 00:26:36.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.270 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:26:36.270 00:26:36.270 --- 10.0.0.1 ping statistics --- 00:26:36.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.270 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2313747 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2313747 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2313747 ']' 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:36.270 [2024-12-05 13:58:07.545886] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:26:36.270 [2024-12-05 13:58:07.545981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.270 [2024-12-05 13:58:07.616638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.270 [2024-12-05 13:58:07.667854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.270 [2024-12-05 13:58:07.667909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.270 [2024-12-05 13:58:07.667938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.270 [2024-12-05 13:58:07.667949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:36.270 [2024-12-05 13:58:07.667959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.270 [2024-12-05 13:58:07.669441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.270 [2024-12-05 13:58:07.669494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.270 [2024-12-05 13:58:07.669498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.270 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:36.529 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.529 13:58:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:36.529 [2024-12-05 13:58:08.046607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.787 13:58:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:37.046 Malloc0 00:26:37.046 13:58:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:37.304 13:58:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:37.562 13:58:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.820 [2024-12-05 13:58:09.209155] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.820 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:38.079 [2024-12-05 13:58:09.538115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:38.079 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:38.337 [2024-12-05 13:58:09.855200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2314046 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2314046 /var/tmp/bdevperf.sock 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2314046 ']' 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:38.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.596 13:58:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:38.854 13:58:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.855 13:58:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:38.855 13:58:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:39.112 NVMe0n1 00:26:39.112 13:58:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:39.678 00:26:39.678 13:58:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2314180 00:26:39.678 13:58:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:39.678 13:58:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:26:40.611 13:58:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.871 [2024-12-05 13:58:12.324825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.324930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.324947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.324960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.324973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.324985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.324998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.325009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.325021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.325034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.325046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same 
with the state(6) to be set 00:26:40.871 [2024-12-05 13:58:12.325068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742aa0 is same with the state(6) to be set 00:26:40.872 13:58:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- #
sleep 3 00:26:44.156 13:58:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:44.415 00:26:44.415 13:58:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:44.741 [2024-12-05 13:58:16.120147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7437c0 is same with the state(6)
to be set 00:26:44.741 13:58:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:48.027 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.027 [2024-12-05 13:58:19.402322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.027 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:48.960 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:49.217 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2314180 00:26:55.833 { 00:26:55.833 "results": [ 00:26:55.833 { 00:26:55.833 "job": "NVMe0n1", 00:26:55.833 "core_mask": "0x1", 00:26:55.833 "workload": "verify", 00:26:55.833 "status": "finished", 00:26:55.833 "verify_range": { 00:26:55.833 "start": 0, 00:26:55.833 "length": 16384 00:26:55.833 }, 00:26:55.833 "queue_depth": 128, 00:26:55.833 "io_size": 4096, 00:26:55.833 "runtime": 15.045072, 00:26:55.833 "iops": 8575.299606409328, 00:26:55.833 "mibps": 33.497264087536436, 00:26:55.833 "io_failed": 5364, 00:26:55.833 "io_timeout": 0, 00:26:55.833 "avg_latency_us": 14264.998309051722, 00:26:55.833 "min_latency_us": 600.7466666666667, 00:26:55.833 "max_latency_us": 43884.847407407404 00:26:55.833 } 00:26:55.833 ], 00:26:55.833 "core_count": 1 00:26:55.833 } 00:26:55.833 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2314046 00:26:55.833 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2314046 ']' 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2314046 00:26:55.834 13:58:26 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2314046 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2314046' 00:26:55.834 killing process with pid 2314046 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2314046 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2314046 00:26:55.834 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:55.834 [2024-12-05 13:58:09.923986] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:26:55.834 [2024-12-05 13:58:09.924071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314046 ] 00:26:55.834 [2024-12-05 13:58:09.992211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.834 [2024-12-05 13:58:10.054833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.834 Running I/O for 15 seconds... 
00:26:55.834 8562.00 IOPS, 33.45 MiB/s [2024-12-05T12:58:27.360Z] [2024-12-05 13:58:12.327194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.834 [2024-12-05 13:58:12.327411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.834 [2024-12-05 13:58:12.327924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.327972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.327986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.328014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.328041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.328069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.328097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.328124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.328152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.834 [2024-12-05 13:58:12.328181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.834 [2024-12-05 13:58:12.328196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.834 [2024-12-05 13:58:12.328210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 
[2024-12-05 13:58:12.328411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 
[2024-12-05 13:58:12.328905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.328975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.328989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.329016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.329048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.329077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.835 [2024-12-05 13:58:12.329105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.835 [2024-12-05 13:58:12.329348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.835 [2024-12-05 13:58:12.329362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 
[2024-12-05 13:58:12.329389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329557] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329887] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.329977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.329992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 
nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 
[2024-12-05 13:58:12.330227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.836 [2024-12-05 13:58:12.330504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-12-05 13:58:12.330538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.836 [2024-12-05 13:58:12.330555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80216 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 
[2024-12-05 13:58:12.330585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80224 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:26:55.837 [2024-12-05 13:58:12.330744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80248 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80256 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80272 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330896] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.330956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.330966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.330976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80288 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.331001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.331011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.331022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80296 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.331034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.331046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.331057] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.331067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80304 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.331079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.331092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.331102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.331112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80312 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.331125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.331137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.331147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.331163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80320 len:8 PRP1 0x0 PRP2 0x0 00:26:55.837 [2024-12-05 13:58:12.331176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.837 [2024-12-05 13:58:12.331188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.837 [2024-12-05 13:58:12.331199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.837 [2024-12-05 13:58:12.331210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80328 len:8 PRP1 0x0 PRP2 0x0 
00:26:55.837 [2024-12-05 13:58:12.331222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:12.331234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.837 [2024-12-05 13:58:12.331244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.837 [2024-12-05 13:58:12.331258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80336 len:8 PRP1 0x0 PRP2 0x0
00:26:55.837 [2024-12-05 13:58:12.331271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:12.331332] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:55.837 [2024-12-05 13:58:12.331368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:55.837 [2024-12-05 13:58:12.331386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:12.331400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:55.837 [2024-12-05 13:58:12.331413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:12.331437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:55.837 [2024-12-05 13:58:12.331451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:12.331464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:55.837 [2024-12-05 13:58:12.331476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:12.331489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:55.837 [2024-12-05 13:58:12.334795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:55.837 [2024-12-05 13:58:12.334831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e570 (9): Bad file descriptor
00:26:55.837 [2024-12-05 13:58:12.358232] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:55.837 8450.50 IOPS, 33.01 MiB/s [2024-12-05T12:58:27.363Z] 8562.00 IOPS, 33.45 MiB/s [2024-12-05T12:58:27.363Z] 8611.75 IOPS, 33.64 MiB/s [2024-12-05T12:58:27.363Z] [2024-12-05 13:58:16.123149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.837 [2024-12-05 13:58:16.123193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:16.123237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.837 [2024-12-05 13:58:16.123254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:16.123270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.837 [2024-12-05 13:58:16.123284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:16.123299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.837 [2024-12-05 13:58:16.123313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:16.123328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.837 [2024-12-05 13:58:16.123343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:16.123358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.837 [2024-12-05 13:58:16.123382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:16.123397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.837 [2024-12-05 13:58:16.123411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.837 [2024-12-05 13:58:16.123435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.838 [2024-12-05 13:58:16.123450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.838 [2024-12-05 13:58:16.123478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.838 [2024-12-05 13:58:16.123506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.838 [2024-12-05 13:58:16.123534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.123973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.123986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.838 [2024-12-05 13:58:16.124214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.838 [2024-12-05 13:58:16.124228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.124973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.124988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.839 [2024-12-05 13:58:16.125387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.839 [2024-12-05 13:58:16.125400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88024 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88032 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88040 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88048 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88056 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88064 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88072 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88080 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88088 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88096 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88104 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.125964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.125975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.125986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88112 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.125998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.126011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.126021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.126032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88120 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.126044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.126057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.126067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.126078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88128 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.126090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.126103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.126114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.126124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88136 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.126137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.126150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.126160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.126170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88144 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.126182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.126195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.126205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.126216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88152 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.126228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.840 [2024-12-05 13:58:16.126240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:55.840 [2024-12-05 13:58:16.126250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:55.840 [2024-12-05 13:58:16.126261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88160 len:8 PRP1 0x0 PRP2 0x0
00:26:55.840 [2024-12-05 13:58:16.126273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.840 [2024-12-05 13:58:16.126286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.840 [2024-12-05 13:58:16.126296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.840 [2024-12-05 13:58:16.126306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88168 len:8 PRP1 0x0 PRP2 0x0 00:26:55.840 [2024-12-05 13:58:16.126325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.840 [2024-12-05 13:58:16.126339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.840 [2024-12-05 13:58:16.126349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.840 [2024-12-05 13:58:16.126360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88176 len:8 PRP1 0x0 PRP2 0x0 00:26:55.840 [2024-12-05 13:58:16.126372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.840 [2024-12-05 13:58:16.126384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.840 [2024-12-05 13:58:16.126395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.840 [2024-12-05 13:58:16.126405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88184 len:8 PRP1 0x0 PRP2 0x0 00:26:55.840 [2024-12-05 13:58:16.126425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.840 [2024-12-05 13:58:16.126441] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.840 [2024-12-05 13:58:16.126452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.840 [2024-12-05 13:58:16.126463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88192 len:8 PRP1 0x0 PRP2 0x0 00:26:55.840 [2024-12-05 13:58:16.126475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.840 [2024-12-05 13:58:16.126487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.840 [2024-12-05 13:58:16.126498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.840 [2024-12-05 13:58:16.126509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88200 len:8 PRP1 0x0 PRP2 0x0 00:26:55.840 [2024-12-05 13:58:16.126521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.840 [2024-12-05 13:58:16.126534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.840 [2024-12-05 13:58:16.126544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.840 [2024-12-05 13:58:16.126555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88208 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88216 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88224 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88232 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88240 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88248 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88256 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88264 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126919] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88272 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.126955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.126965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.126976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88280 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.126988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88288 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88296 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 
[2024-12-05 13:58:16.127085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88304 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88312 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88320 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88328 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88336 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88344 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88352 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88360 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88368 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88376 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87448 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.841 [2024-12-05 13:58:16.127644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.841 [2024-12-05 13:58:16.127656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87456 len:8 PRP1 0x0 PRP2 0x0 00:26:55.841 [2024-12-05 13:58:16.127668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.841 [2024-12-05 13:58:16.127681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.842 [2024-12-05 13:58:16.127691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.842 [2024-12-05 13:58:16.127702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87464 len:8 PRP1 0x0 PRP2 0x0 00:26:55.842 [2024-12-05 13:58:16.127715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.127728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.842 [2024-12-05 13:58:16.127738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:55.842 [2024-12-05 13:58:16.127749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87472 len:8 PRP1 0x0 PRP2 0x0 00:26:55.842 [2024-12-05 13:58:16.127761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.127774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.842 [2024-12-05 13:58:16.127784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.842 [2024-12-05 13:58:16.127795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87480 len:8 PRP1 0x0 PRP2 0x0 00:26:55.842 [2024-12-05 13:58:16.127811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.127824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.842 [2024-12-05 13:58:16.127835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.842 [2024-12-05 13:58:16.127846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87488 len:8 PRP1 0x0 PRP2 0x0 00:26:55.842 [2024-12-05 13:58:16.127858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.127871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.842 [2024-12-05 13:58:16.127882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.842 [2024-12-05 13:58:16.127892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87496 len:8 PRP1 0x0 PRP2 0x0 00:26:55.842 [2024-12-05 13:58:16.127904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.127971] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:55.842 [2024-12-05 13:58:16.128014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.842 [2024-12-05 13:58:16.128033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.128048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.842 [2024-12-05 13:58:16.128061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.128074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.842 [2024-12-05 13:58:16.128087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.128100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.842 [2024-12-05 13:58:16.128113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:16.128132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
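The sequence above — every queued WRITE (lba:88080 through lba:88376, step 8) printed, completed manually with "ABORTED - SQ DELETION (00/08)", then "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422" and the controller marked failed — can be modeled with a small sketch. This is an illustrative simulation of the pattern visible in the log, not SPDK source; the class and method names (`Controller`, `fail_active_path`) are hypothetical.

```python
# Illustrative model (not SPDK code) of the log's failover pattern: when a
# path dies, every queued request is completed manually with the NVMe status
# "ABORTED - SQ DELETION" (status code type 00, status code 08), and the
# controller switches to the next multipath target before resetting.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

ABORTED_SQ_DELETION: Tuple[int, int] = (0x0, 0x08)  # rendered "(00/08)" in the log

@dataclass
class Request:
    lba: int
    opcode: str = "WRITE"
    status: Optional[Tuple[int, int]] = None

@dataclass
class Controller:
    paths: List[str]                  # e.g. ["10.0.0.2:4421", "10.0.0.2:4422"]
    active: int = 0
    queued: List[Request] = field(default_factory=list)

    def fail_active_path(self) -> List[Request]:
        """Abort all queued I/O manually, then fail over to the next path."""
        completed = []
        for req in self.queued:
            req.status = ABORTED_SQ_DELETION  # "Command completed manually"
            completed.append(req)
        self.queued.clear()
        self.active = (self.active + 1) % len(self.paths)  # "Start failover ..."
        return completed

ctrlr = Controller(paths=["10.0.0.2:4421", "10.0.0.2:4422"])
# The log shows queued WRITEs at lba 88080..88376 in steps of 8 sectors.
ctrlr.queued = [Request(lba) for lba in range(88080, 88384, 8)]
aborted = ctrlr.fail_active_path()
print(len(aborted), ctrlr.paths[ctrlr.active])  # 38 aborted; now on 10.0.0.2:4422
```

As a side note, the interleaved throughput samples are internally consistent with the `len:8` (eight 512-byte sectors, i.e. 4 KiB) I/O size seen here: 8515.40 IOPS × 4096 B ÷ 2^20 ≈ 33.26 MiB/s, matching the "8515.40 IOPS, 33.26 MiB/s" pair reported after the reset.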
00:26:55.842 [2024-12-05 13:58:16.128175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e570 (9): Bad file descriptor 00:26:55.842 [2024-12-05 13:58:16.131410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:55.842 8515.40 IOPS, 33.26 MiB/s [2024-12-05T12:58:27.368Z] [2024-12-05 13:58:16.207243] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:26:55.842 8505.17 IOPS, 33.22 MiB/s [2024-12-05T12:58:27.368Z] 8526.29 IOPS, 33.31 MiB/s [2024-12-05T12:58:27.368Z] 8532.00 IOPS, 33.33 MiB/s [2024-12-05T12:58:27.368Z] 8556.00 IOPS, 33.42 MiB/s [2024-12-05T12:58:27.368Z] [2024-12-05 13:58:20.694223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.842 [2024-12-05 13:58:20.694288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.842 [2024-12-05 13:58:20.694332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.842 [2024-12-05 13:58:20.694375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.842 [2024-12-05 
13:58:20.694404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.842 [2024-12-05 13:58:20.694442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.842 [2024-12-05 13:58:20.694471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.842 [2024-12-05 13:58:20.694499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:103 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.842 [2024-12-05 13:58:20.694873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.842 [2024-12-05 13:58:20.694887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.694901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.694915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.694928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.694943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.843 [2024-12-05 13:58:20.694957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.694971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.694984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.694999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 
[2024-12-05 13:58:20.695229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 
[2024-12-05 13:58:20.695721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.695977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.695990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.843 [2024-12-05 13:58:20.696005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.843 [2024-12-05 13:58:20.696018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 
lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 
[2024-12-05 13:58:20.696203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 
[2024-12-05 13:58:20.696695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.696979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.696993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.697007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.697020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.697035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.697049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.697064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.697077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.697091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.697104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.697119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.844 [2024-12-05 13:58:20.697133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.844 [2024-12-05 13:58:20.697148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 
[2024-12-05 13:58:20.697175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 
[2024-12-05 13:58:20.697672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.845 [2024-12-05 13:58:20.697742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.845 [2024-12-05 13:58:20.697937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.697951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59ddb0 is same with the state(6) to be set 00:26:55.845 [2024-12-05 13:58:20.697967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.845 [2024-12-05 13:58:20.697985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.845 [2024-12-05 13:58:20.697997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29448 len:8 PRP1 0x0 PRP2 0x0 00:26:55.845 [2024-12-05 13:58:20.698010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.698071] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:55.845 [2024-12-05 13:58:20.698109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.845 [2024-12-05 13:58:20.698127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.698149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.845 [2024-12-05 13:58:20.698172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.698187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.845 [2024-12-05 13:58:20.698200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.698213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.845 [2024-12-05 13:58:20.698226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.845 [2024-12-05 13:58:20.698239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:26:55.845 [2024-12-05 13:58:20.698291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e570 (9): Bad file descriptor 00:26:55.845 [2024-12-05 13:58:20.701605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:55.845 [2024-12-05 13:58:20.726283] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:55.845 8532.10 IOPS, 33.33 MiB/s [2024-12-05T12:58:27.371Z] 8552.36 IOPS, 33.41 MiB/s [2024-12-05T12:58:27.371Z] 8571.67 IOPS, 33.48 MiB/s [2024-12-05T12:58:27.371Z] 8584.92 IOPS, 33.53 MiB/s [2024-12-05T12:58:27.371Z] 8596.93 IOPS, 33.58 MiB/s [2024-12-05T12:58:27.371Z] 8600.87 IOPS, 33.60 MiB/s 00:26:55.845 Latency(us) 00:26:55.845 [2024-12-05T12:58:27.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.845 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:55.845 Verification LBA range: start 0x0 length 0x4000 00:26:55.845 NVMe0n1 : 15.05 8575.30 33.50 356.53 0.00 14265.00 600.75 43884.85 00:26:55.845 [2024-12-05T12:58:27.371Z] =================================================================================================================== 00:26:55.845 [2024-12-05T12:58:27.371Z] Total : 8575.30 33.50 356.53 0.00 14265.00 600.75 43884.85 00:26:55.845 Received shutdown signal, test time was about 15.000000 seconds 00:26:55.845 00:26:55.846 Latency(us) 00:26:55.846 [2024-12-05T12:58:27.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.846 [2024-12-05T12:58:27.372Z] =================================================================================================================== 00:26:55.846 [2024-12-05T12:58:27.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:55.846 
13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2315935 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2315935 /var/tmp/bdevperf.sock 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2315935 ']' 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:55.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:55.846 13:58:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:55.846 [2024-12-05 13:58:27.055203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:55.846 13:58:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:56.104 [2024-12-05 13:58:27.360097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:56.104 13:58:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:56.361 NVMe0n1 00:26:56.361 13:58:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:56.929 00:26:56.929 13:58:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:57.187 00:26:57.187 13:58:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:57.187 13:58:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:57.446 13:58:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:57.703 13:58:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:00.995 13:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:00.995 13:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:00.995 13:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2316700 00:27:00.995 13:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:00.995 13:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2316700 00:27:02.373 { 00:27:02.373 "results": [ 00:27:02.373 { 00:27:02.373 "job": "NVMe0n1", 00:27:02.373 "core_mask": "0x1", 00:27:02.373 "workload": "verify", 00:27:02.373 "status": "finished", 00:27:02.373 "verify_range": { 00:27:02.373 "start": 0, 00:27:02.373 "length": 16384 00:27:02.373 }, 00:27:02.373 "queue_depth": 128, 00:27:02.373 "io_size": 4096, 00:27:02.373 "runtime": 1.0098, 00:27:02.373 "iops": 8687.85898197663, 00:27:02.373 "mibps": 33.93694914834621, 00:27:02.373 "io_failed": 0, 00:27:02.373 "io_timeout": 0, 00:27:02.373 "avg_latency_us": 
14671.598058352436, 00:27:02.373 "min_latency_us": 1328.9244444444444, 00:27:02.373 "max_latency_us": 15146.097777777777 00:27:02.373 } 00:27:02.373 ], 00:27:02.373 "core_count": 1 00:27:02.373 } 00:27:02.373 13:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:02.373 [2024-12-05 13:58:26.574374] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:27:02.373 [2024-12-05 13:58:26.574508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315935 ] 00:27:02.373 [2024-12-05 13:58:26.646737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.373 [2024-12-05 13:58:26.701394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.373 [2024-12-05 13:58:29.110216] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:02.373 [2024-12-05 13:58:29.110298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.373 [2024-12-05 13:58:29.110321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.373 [2024-12-05 13:58:29.110352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.373 [2024-12-05 13:58:29.110367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.373 [2024-12-05 13:58:29.110382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.373 [2024-12-05 13:58:29.110396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.373 [2024-12-05 13:58:29.110410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.373 [2024-12-05 13:58:29.110434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.373 [2024-12-05 13:58:29.110450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:02.373 [2024-12-05 13:58:29.110497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:02.373 [2024-12-05 13:58:29.110529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b5570 (9): Bad file descriptor 00:27:02.373 [2024-12-05 13:58:29.121088] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:02.373 Running I/O for 1 seconds... 
00:27:02.373 8643.00 IOPS, 33.76 MiB/s 00:27:02.373 Latency(us) 00:27:02.373 [2024-12-05T12:58:33.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.373 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:02.373 Verification LBA range: start 0x0 length 0x4000 00:27:02.373 NVMe0n1 : 1.01 8687.86 33.94 0.00 0.00 14671.60 1328.92 15146.10 00:27:02.373 [2024-12-05T12:58:33.899Z] =================================================================================================================== 00:27:02.373 [2024-12-05T12:58:33.899Z] Total : 8687.86 33.94 0.00 0.00 14671.60 1328.92 15146.10 00:27:02.373 13:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:02.373 13:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:02.631 13:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:02.889 13:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:02.889 13:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:03.147 13:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:03.407 13:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:06.696 13:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:06.696 13:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2315935 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2315935 ']' 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2315935 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2315935 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2315935' 00:27:06.696 killing process with pid 2315935 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2315935 00:27:06.696 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2315935 00:27:06.953 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:06.953 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.211 rmmod nvme_tcp 00:27:07.211 rmmod nvme_fabrics 00:27:07.211 rmmod nvme_keyring 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2313747 ']' 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2313747 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2313747 ']' 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2313747 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.211 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2313747 00:27:07.212 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:07.212 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.212 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2313747' 00:27:07.212 killing process with pid 2313747 00:27:07.212 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2313747 00:27:07.212 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2313747 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.470 13:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.005 13:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.005 00:27:10.005 real 0m35.986s 00:27:10.005 user 2m7.474s 00:27:10.005 sys 
0m5.819s 00:27:10.005 13:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.005 13:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:10.005 ************************************ 00:27:10.005 END TEST nvmf_failover 00:27:10.005 ************************************ 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.005 ************************************ 00:27:10.005 START TEST nvmf_host_discovery 00:27:10.005 ************************************ 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:10.005 * Looking for test storage... 
00:27:10.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:10.005 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.006 --rc genhtml_branch_coverage=1 00:27:10.006 --rc genhtml_function_coverage=1 00:27:10.006 --rc 
genhtml_legend=1 00:27:10.006 --rc geninfo_all_blocks=1 00:27:10.006 --rc geninfo_unexecuted_blocks=1 00:27:10.006 00:27:10.006 ' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.006 --rc genhtml_branch_coverage=1 00:27:10.006 --rc genhtml_function_coverage=1 00:27:10.006 --rc genhtml_legend=1 00:27:10.006 --rc geninfo_all_blocks=1 00:27:10.006 --rc geninfo_unexecuted_blocks=1 00:27:10.006 00:27:10.006 ' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.006 --rc genhtml_branch_coverage=1 00:27:10.006 --rc genhtml_function_coverage=1 00:27:10.006 --rc genhtml_legend=1 00:27:10.006 --rc geninfo_all_blocks=1 00:27:10.006 --rc geninfo_unexecuted_blocks=1 00:27:10.006 00:27:10.006 ' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.006 --rc genhtml_branch_coverage=1 00:27:10.006 --rc genhtml_function_coverage=1 00:27:10.006 --rc genhtml_legend=1 00:27:10.006 --rc geninfo_all_blocks=1 00:27:10.006 --rc geninfo_unexecuted_blocks=1 00:27:10.006 00:27:10.006 ' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.006 13:58:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.006 13:58:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.006 13:58:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.006 13:58:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:11.913 
13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.913 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.914 13:58:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:11.914 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:11.914 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:11.914 Found net devices under 0000:09:00.0: cvl_0_0 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:11.914 Found net devices under 0000:09:00.1: cvl_0_1 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:11.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:27:11.914 00:27:11.914 --- 10.0.0.2 ping statistics --- 00:27:11.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.914 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:27:11.914 00:27:11.914 --- 10.0.0.1 ping statistics --- 00:27:11.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.914 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.914 
13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:11.914 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2319316 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2319316 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2319316 ']' 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.173 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.173 [2024-12-05 13:58:43.507109] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:27:12.173 [2024-12-05 13:58:43.507210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.173 [2024-12-05 13:58:43.579091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.173 [2024-12-05 13:58:43.633265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.173 [2024-12-05 13:58:43.633317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.173 [2024-12-05 13:58:43.633344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.173 [2024-12-05 13:58:43.633355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.173 [2024-12-05 13:58:43.633365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:12.173 [2024-12-05 13:58:43.634056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 [2024-12-05 13:58:43.769220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 [2024-12-05 13:58:43.777415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:12.432 13:58:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 null0 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 null1 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2319453 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2319453 /tmp/host.sock 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2319453 ']' 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:12.432 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.432 13:58:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 [2024-12-05 13:58:43.850543] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:27:12.432 [2024-12-05 13:58:43.850626] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319453 ] 00:27:12.432 [2024-12-05 13:58:43.914785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.690 [2024-12-05 13:58:43.970099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.690 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.690 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:12.690 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:12.690 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:12.690 
13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.690 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.690 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:12.691 13:58:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:12.691 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.949 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:12.950 13:58:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.950 [2024-12-05 13:58:44.435132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:12.950 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:13.209 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:13.210 13:58:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:13.868 [2024-12-05 13:58:45.197013] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:13.868 [2024-12-05 13:58:45.197038] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:13.868 [2024-12-05 13:58:45.197059] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:13.868 [2024-12-05 13:58:45.283337] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:13.868 [2024-12-05 13:58:45.345017] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:13.868 [2024-12-05 13:58:45.345998] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x2026e20:1 started. 00:27:13.868 [2024-12-05 13:58:45.347800] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:13.868 [2024-12-05 13:58:45.347820] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:13.868 [2024-12-05 13:58:45.354961] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2026e20 was disconnected and freed. delete nvme_qpair. 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.126 13:58:45 
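The wait that just succeeded above is driven by the `waitforcondition` helper whose xtrace is visible throughout (autotest_common.sh@918–924: `local cond`, `local max=10`, `(( max-- ))`, `eval`, `sleep 1`, `return 0`). A minimal reconstruction of that polling loop, inferred from the trace — the actual SPDK implementation may differ in details:

```shell
# Polling helper matching the xtrace pattern seen in this log:
# eval the condition up to $max times, sleeping 1s between attempts.
waitforcondition() {
    local cond=$1
    local max=${2:-10}
    while ((max--)); do
        # eval so callers can pass compound conditions such as
        # 'get_notification_count && ((notification_count == expected_count))'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

This is why the log shows the same `get_subsystem_names` pipeline twice with a `sleep 1` in between: the first eval saw `'' == nvme0` fail, the discovery controller attached during the sleep, and the second eval matched.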
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:14.126 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:14.385 
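`get_notification_count` (host/discovery.sh@74–75) fetches notifications newer than the running `notify_id` and counts them with `jq '. | length'`; the trace shows `notify_id` advancing 0 → 1 → 2 as counts of 0 and 1 accumulate. A sketch of that counting step with the `rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id` call replaced by a JSON argument so the logic is self-contained (the `_from_json` name is ours, not SPDK's):

```shell
# Stand-in for the RPC half of get_notification_count: count a JSON
# array of notification objects and advance notify_id by that count,
# matching the notify_id=0 -> 1 -> 2 progression in the trace.
notify_id=0
get_notification_count_from_json() {
    local json=$1
    notification_count=$(printf '%s' "$json" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
```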
13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.385 13:58:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.644 [2024-12-05 13:58:46.025377] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2027000:1 started. 00:27:14.644 [2024-12-05 13:58:46.028479] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2027000 was disconnected and freed. delete nvme_qpair. 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:14.644 13:58:46 
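`get_bdev_list` (host/discovery.sh@55) and `get_subsystem_names` (@59) share one pipeline shape visible in the trace: RPC JSON → `jq -r '.[].name'` → `sort` → `xargs`, producing a single space-separated line like `nvme0n1 nvme0n2` for the string comparisons above. A sketch of that pipeline reading the JSON from stdin instead of `rpc_cmd` (the `_from_stdin` name is illustrative):

```shell
# Same pipeline as host/discovery.sh@55, minus the RPC transport:
#   jq -r '.[].name'  -> one name per line
#   sort              -> deterministic order for the == comparisons
#   xargs             -> join onto one space-separated line
name_list_from_stdin() {
    jq -r '.[].name' | sort | xargs
}
```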
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.644 [2024-12-05 13:58:46.096273] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:14.644 [2024-12-05 13:58:46.096966] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:14.644 [2024-12-05 13:58:46.096999] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.644 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.645 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.904 [2024-12-05 13:58:46.182614] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:14.904 13:58:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:15.164 [2024-12-05 13:58:46.483091] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:15.164 [2024-12-05 13:58:46.483138] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:15.164 [2024-12-05 13:58:46.483153] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:15.164 [2024-12-05 13:58:46.483161] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:15.731 13:58:47 
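The path check above (`host/discovery.sh@63`) differs from the name lists in one detail: it sorts numerically (`sort -n`), so once the 4421 qpair attaches the comparison sees `4420 4421` in port order regardless of attach order. A stdin-based sketch (function name is ours):

```shell
# host/discovery.sh@63 pipeline without the RPC call: extract every
# trsvcid under .[].ctrlrs[].trid and sort numerically.
subsystem_paths_from_stdin() {
    jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```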
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:15.731 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.993 [2024-12-05 13:58:47.328551] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:15.993 [2024-12-05 13:58:47.328592] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:15.993 [2024-12-05 13:58:47.329998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.993 [2024-12-05 13:58:47.330050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.993 [2024-12-05 13:58:47.330069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.993 [2024-12-05 13:58:47.330082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.993 [2024-12-05 13:58:47.330111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.993 [2024-12-05 13:58:47.330124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.993 [2024-12-05 13:58:47.330138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.993 [2024-12-05 13:58:47.330151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.993 [2024-12-05 13:58:47.330164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:15.993 13:58:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:15.993 [2024-12-05 13:58:47.339989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.993 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.993 [2024-12-05 13:58:47.350029] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.993 [2024-12-05 13:58:47.350051] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:15.993 [2024-12-05 13:58:47.350065] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.993 [2024-12-05 13:58:47.350074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.993 [2024-12-05 13:58:47.350124] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:15.993 [2024-12-05 13:58:47.350381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.993 [2024-12-05 13:58:47.350411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff73d0 with addr=10.0.0.2, port=4420 00:27:15.993 [2024-12-05 13:58:47.350438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.993 [2024-12-05 13:58:47.350468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.993 [2024-12-05 13:58:47.350491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:15.993 [2024-12-05 13:58:47.350505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:15.993 [2024-12-05 13:58:47.350520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:15.993 [2024-12-05 13:58:47.350534] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:15.993 [2024-12-05 13:58:47.350545] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:15.993 [2024-12-05 13:58:47.350553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:15.993 [2024-12-05 13:58:47.360156] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.993 [2024-12-05 13:58:47.360176] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:15.993 [2024-12-05 13:58:47.360185] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.993 [2024-12-05 13:58:47.360192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.993 [2024-12-05 13:58:47.360229] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.360392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.994 [2024-12-05 13:58:47.360427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff73d0 with addr=10.0.0.2, port=4420 00:27:15.994 [2024-12-05 13:58:47.360445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.994 [2024-12-05 13:58:47.360467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.994 [2024-12-05 13:58:47.360488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:15.994 [2024-12-05 13:58:47.360503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:15.994 [2024-12-05 13:58:47.360516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:15.994 [2024-12-05 13:58:47.360528] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:15.994 [2024-12-05 13:58:47.360538] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:15.994 [2024-12-05 13:58:47.360546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:15.994 [2024-12-05 13:58:47.370262] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.994 [2024-12-05 13:58:47.370282] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:15.994 [2024-12-05 13:58:47.370290] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.370298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.994 [2024-12-05 13:58:47.370336] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.370529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.994 [2024-12-05 13:58:47.370557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff73d0 with addr=10.0.0.2, port=4420 00:27:15.994 [2024-12-05 13:58:47.370579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.994 [2024-12-05 13:58:47.370603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.994 [2024-12-05 13:58:47.370624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:15.994 [2024-12-05 13:58:47.370638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:15.994 [2024-12-05 13:58:47.370652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:15.994 [2024-12-05 13:58:47.370665] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:15.994 [2024-12-05 13:58:47.370674] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:15.994 [2024-12-05 13:58:47.370682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:27:15.994 [2024-12-05 13:58:47.380369] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.994 [2024-12-05 13:58:47.380392] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:15.994 [2024-12-05 13:58:47.380424] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.380434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.994 [2024-12-05 13:58:47.380482] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.380624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.994 [2024-12-05 13:58:47.380653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff73d0 with addr=10.0.0.2, port=4420 00:27:15.994 [2024-12-05 13:58:47.380680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.994 [2024-12-05 13:58:47.380703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.994 [2024-12-05 13:58:47.380724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:15.994 [2024-12-05 13:58:47.380747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:15.994 [2024-12-05 13:58:47.380761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:15.994 [2024-12-05 13:58:47.380774] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:15.994 [2024-12-05 13:58:47.380783] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:15.994 [2024-12-05 13:58:47.380791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:15.994 [2024-12-05 13:58:47.390516] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.994 [2024-12-05 13:58:47.390541] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:15.994 [2024-12-05 13:58:47.390551] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.390559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.994 [2024-12-05 13:58:47.390586] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:15.994 [2024-12-05 13:58:47.390746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.994 [2024-12-05 13:58:47.390774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff73d0 with addr=10.0.0.2, port=4420 00:27:15.994 [2024-12-05 13:58:47.390791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.994 [2024-12-05 13:58:47.390813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.994 [2024-12-05 13:58:47.390846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:15.994 [2024-12-05 13:58:47.390863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:15.994 [2024-12-05 13:58:47.390877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:15.994 [2024-12-05 13:58:47.390889] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:15.994 [2024-12-05 13:58:47.390898] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:15.994 [2024-12-05 13:58:47.390906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:15.994 [2024-12-05 13:58:47.400619] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.994 [2024-12-05 13:58:47.400641] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:15.994 [2024-12-05 13:58:47.400651] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.400659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.994 [2024-12-05 13:58:47.400684] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.400827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.994 [2024-12-05 13:58:47.400854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff73d0 with addr=10.0.0.2, port=4420 00:27:15.994 [2024-12-05 13:58:47.400870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.994 [2024-12-05 13:58:47.400892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.994 [2024-12-05 13:58:47.400913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:15.994 [2024-12-05 13:58:47.400932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:15.994 [2024-12-05 13:58:47.400946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:15.994 [2024-12-05 13:58:47.400958] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:15.994 [2024-12-05 13:58:47.400967] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:15.994 [2024-12-05 13:58:47.400975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:15.994 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.994 [2024-12-05 13:58:47.410718] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.994 [2024-12-05 13:58:47.410739] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:15.994 [2024-12-05 13:58:47.410748] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.410756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.994 [2024-12-05 13:58:47.410795] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:15.994 [2024-12-05 13:58:47.410965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.995 [2024-12-05 13:58:47.410992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff73d0 with addr=10.0.0.2, port=4420 00:27:15.995 [2024-12-05 13:58:47.411009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff73d0 is same with the state(6) to be set 00:27:15.995 [2024-12-05 13:58:47.411031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff73d0 (9): Bad file descriptor 00:27:15.995 [2024-12-05 13:58:47.411063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:15.995 [2024-12-05 13:58:47.411081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:15.995 [2024-12-05 13:58:47.411095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:15.995 [2024-12-05 13:58:47.411107] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:15.995 [2024-12-05 13:58:47.411117] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:15.995 [2024-12-05 13:58:47.411124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:15.995 [2024-12-05 13:58:47.414612] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:15.995 [2024-12-05 13:58:47.414642] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:15.995 13:58:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.995 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:16.256 13:58:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:16.256 13:58:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.256 13:58:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.196 [2024-12-05 13:58:48.675658] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:17.196 [2024-12-05 
13:58:48.675687] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:17.196 [2024-12-05 13:58:48.675725] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:17.456 [2024-12-05 13:58:48.763014] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:17.716 [2024-12-05 13:58:49.071507] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:17.716 [2024-12-05 13:58:49.072574] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x215e150:1 started. 00:27:17.716 [2024-12-05 13:58:49.074747] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:17.716 [2024-12-05 13:58:49.074795] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:17.716 [2024-12-05 13:58:49.076409] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 3] qpair 0x215e150 was disconnected and freed. delete nvme_qpair. 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 request: 00:27:17.716 { 00:27:17.716 "name": "nvme", 00:27:17.716 "trtype": "tcp", 00:27:17.716 "traddr": "10.0.0.2", 00:27:17.716 "adrfam": "ipv4", 00:27:17.716 "trsvcid": "8009", 00:27:17.716 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:17.716 "wait_for_attach": true, 00:27:17.716 "method": "bdev_nvme_start_discovery", 00:27:17.716 "req_id": 1 00:27:17.716 } 00:27:17.716 Got JSON-RPC error response 00:27:17.716 response: 00:27:17.716 { 00:27:17.716 "code": -17, 00:27:17.716 "message": "File exists" 00:27:17.716 } 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 request: 00:27:17.716 { 00:27:17.716 "name": "nvme_second", 00:27:17.716 "trtype": "tcp", 00:27:17.716 "traddr": "10.0.0.2", 00:27:17.716 "adrfam": "ipv4", 00:27:17.716 "trsvcid": "8009", 00:27:17.716 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:17.716 "wait_for_attach": true, 00:27:17.716 "method": "bdev_nvme_start_discovery", 00:27:17.716 "req_id": 1 00:27:17.716 } 00:27:17.716 Got JSON-RPC error response 00:27:17.716 response: 00:27:17.716 
{ 00:27:17.716 "code": -17, 00:27:17.716 "message": "File exists" 00:27:17.716 } 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 
13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:17.716 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:17.976 13:58:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.976 13:58:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.915 [2024-12-05 13:58:50.278208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.915 [2024-12-05 13:58:50.278279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2005d50 with addr=10.0.0.2, port=8010 00:27:18.915 [2024-12-05 13:58:50.278310] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:18.915 [2024-12-05 13:58:50.278324] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:18.915 [2024-12-05 13:58:50.278345] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:19.850 [2024-12-05 13:58:51.280581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.850 [2024-12-05 13:58:51.280616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2005d50 with addr=10.0.0.2, port=8010 00:27:19.850 [2024-12-05 13:58:51.280638] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:19.851 [2024-12-05 13:58:51.280651] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:19.851 [2024-12-05 13:58:51.280663] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:20.788 [2024-12-05 13:58:52.282831] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:20.788 request: 00:27:20.788 { 00:27:20.788 "name": "nvme_second", 00:27:20.788 "trtype": "tcp", 00:27:20.788 "traddr": "10.0.0.2", 00:27:20.788 "adrfam": "ipv4", 00:27:20.788 "trsvcid": "8010", 00:27:20.788 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:20.788 "wait_for_attach": false, 00:27:20.788 "attach_timeout_ms": 3000, 
00:27:20.788 "method": "bdev_nvme_start_discovery", 00:27:20.788 "req_id": 1 00:27:20.788 } 00:27:20.788 Got JSON-RPC error response 00:27:20.788 response: 00:27:20.788 { 00:27:20.788 "code": -110, 00:27:20.788 "message": "Connection timed out" 00:27:20.788 } 00:27:20.788 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:20.788 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:20.788 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:20.788 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:20.788 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:20.788 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:20.789 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:20.789 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:20.789 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.789 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.789 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:20.789 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:20.789 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 2319453 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.047 rmmod nvme_tcp 00:27:21.047 rmmod nvme_fabrics 00:27:21.047 rmmod nvme_keyring 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2319316 ']' 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2319316 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2319316 ']' 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2319316 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2319316 00:27:21.047 13:58:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2319316' 00:27:21.047 killing process with pid 2319316 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2319316 00:27:21.047 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2319316 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.307 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.214 
13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.214 00:27:23.214 real 0m13.657s 00:27:23.214 user 0m19.875s 00:27:23.214 sys 0m2.905s 00:27:23.214 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.214 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.214 ************************************ 00:27:23.214 END TEST nvmf_host_discovery 00:27:23.214 ************************************ 00:27:23.214 13:58:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:23.214 13:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:23.214 13:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.214 13:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.473 ************************************ 00:27:23.473 START TEST nvmf_host_multipath_status 00:27:23.473 ************************************ 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:23.473 * Looking for test storage... 
00:27:23.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:23.473 13:58:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.473 13:58:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.473 --rc genhtml_branch_coverage=1 00:27:23.473 --rc genhtml_function_coverage=1 00:27:23.473 --rc genhtml_legend=1 00:27:23.473 --rc geninfo_all_blocks=1 00:27:23.473 --rc geninfo_unexecuted_blocks=1 00:27:23.473 00:27:23.473 ' 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.473 --rc genhtml_branch_coverage=1 00:27:23.473 --rc genhtml_function_coverage=1 00:27:23.473 --rc genhtml_legend=1 00:27:23.473 --rc geninfo_all_blocks=1 00:27:23.473 --rc geninfo_unexecuted_blocks=1 00:27:23.473 00:27:23.473 ' 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.473 --rc genhtml_branch_coverage=1 00:27:23.473 --rc genhtml_function_coverage=1 00:27:23.473 --rc genhtml_legend=1 00:27:23.473 --rc geninfo_all_blocks=1 00:27:23.473 --rc geninfo_unexecuted_blocks=1 00:27:23.473 00:27:23.473 ' 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.473 --rc genhtml_branch_coverage=1 00:27:23.473 --rc genhtml_function_coverage=1 00:27:23.473 --rc genhtml_legend=1 00:27:23.473 --rc geninfo_all_blocks=1 00:27:23.473 --rc geninfo_unexecuted_blocks=1 00:27:23.473 00:27:23.473 ' 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:23.473 
13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.473 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:23.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
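The `[: : integer expression expected` warning printed above is benign: nvmf/common.sh line 33 applies a numeric test (`-eq`) to a variable that is empty in this configuration, so `[` complains and the branch is simply not taken. A minimal reproduction, plus one common guard (an illustration, not necessarily the upstream fix):

```shell
FLAG=""                                   # empty, as in this run
# Failing shape: [ requires an integer on both sides of -eq, so an empty
# expansion triggers "integer expression expected" (suppressed here).
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "enabled"
fi
# Defaulting the expansion avoids the warning entirely:
if [ "${FLAG:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
# → disabled
```

Because the test fails rather than aborts, the surrounding script continues normally, which is why the log proceeds straight to `common.sh@37`.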
00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:23.474 13:58:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.474 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
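The array setup above classifies NICs by PCI vendor/device ID into `e810`, `x722`, and `mlx` buckets before picking test interfaces. A self-contained sketch of that pattern, with a hand-filled `pci_bus_cache` standing in for the real sysfs/lspci-derived cache (the two addresses are the ones this log later reports; the cache layout is an assumption for illustration):

```shell
# Hypothetical stand-in for pci_bus_cache: "vendor:device" -> PCI addresses.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:09:00.0 0000:09:00.1"   # Intel E810-XXV (this log)
  ["0x8086:0x1592"]=""                            # Intel E810-C, none present
)
intel=0x8086
e810=()
# Unquoted expansion word-splits the cached addresses into array elements;
# an empty cache entry contributes nothing.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")
echo "${#pci_devs[@]}"   # → 2
```

This is why the log's `(( 2 == 0 ))` check passes over the "no devices" branch and iterates both `0000:09:00.0` and `0000:09:00.1`.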
00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.010 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:26.011 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:26.011 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:26.011 Found net devices under 0000:09:00.0: cvl_0_0 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.011 13:58:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:26.011 Found net devices under 0000:09:00.1: cvl_0_1 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.011 13:58:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:26.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:27:26.011 00:27:26.011 --- 10.0.0.2 ping statistics --- 00:27:26.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.011 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:26.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:27:26.011 00:27:26.011 --- 10.0.0.1 ping statistics --- 00:27:26.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.011 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2322596 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2322596 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2322596 ']' 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.011 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.011 [2024-12-05 13:58:57.267472] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:27:26.011 [2024-12-05 13:58:57.267566] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.011 [2024-12-05 13:58:57.338142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:26.011 [2024-12-05 13:58:57.389029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.011 [2024-12-05 13:58:57.389086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:26.011 [2024-12-05 13:58:57.389114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.011 [2024-12-05 13:58:57.389125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.011 [2024-12-05 13:58:57.389135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.012 [2024-12-05 13:58:57.390548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.012 [2024-12-05 13:58:57.390555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2322596 00:27:26.012 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:26.272 [2024-12-05 13:58:57.777341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.530 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:27:26.788 Malloc0 00:27:26.788 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:27.047 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.305 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.562 [2024-12-05 13:58:58.898737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.562 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:27.820 [2024-12-05 13:58:59.163358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2322783 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2322783 /var/tmp/bdevperf.sock 00:27:27.820 13:58:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2322783 ']' 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.820 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:28.078 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.078 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:28.078 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:28.335 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:28.903 Nvme0n1 00:27:28.903 13:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:29.473 Nvme0n1 00:27:29.473 13:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:29.473 13:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:31.374 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:31.374 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:31.631 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:31.892 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:32.829 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:32.829 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:32.829 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.829 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:33.089 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.089 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:33.089 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.089 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:33.347 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:33.347 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:33.347 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.347 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.913 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:34.171 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.171 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:34.433 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.433 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:34.721 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.721 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:34.721 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:34.978 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:35.237 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:36.173 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:36.173 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:36.173 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.173 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:36.431 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:36.431 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:36.431 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.431 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:36.688 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.688 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:36.688 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.688 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:36.947 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.947 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:36.947 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.947 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:37.205 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.205 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:37.205 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.205 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:37.463 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.463 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:37.463 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.463 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:37.721 13:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.721 13:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:37.721 13:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:37.980 13:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:38.239 13:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:39.618 13:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:39.618 13:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:39.618 13:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.618 13:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:39.618 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.618 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:39.618 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.618 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:39.876 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:39.876 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:39.876 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.876 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.135 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.135 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.135 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.135 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:40.393 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.393 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:40.393 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.393 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:40.651 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.651 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:40.651 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.651 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:40.910 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.910 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:40.910 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:41.481 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:41.481 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:42.859 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:42.859 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:42.859 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.859 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:42.859 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.859 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:42.859 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.859 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:43.117 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.117 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:43.117 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.117 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:43.376 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.376 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:43.376 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.376 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:43.635 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.635 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:43.635 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.635 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:43.893 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.893 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:43.893 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.893 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:44.150 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:44.150 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:44.150 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:44.407 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:44.665 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:46.043 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:46.043 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:46.043 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.043 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:46.043 13:59:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:46.043 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:46.043 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.043 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:46.300 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:46.300 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:46.300 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.300 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:46.557 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.557 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:46.557 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.557 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:46.814 
13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.814 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:46.814 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.814 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:47.072 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:47.072 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:47.072 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.072 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:47.329 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:47.329 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:47.329 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:47.586 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:47.844 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.234 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:49.492 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:49.492 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:49.492 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.492 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:49.750 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:49.750 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:49.750 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.750 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:50.009 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.009 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:50.009 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.009 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:50.267 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.267 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:50.267 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.267 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:50.833 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.833 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:50.833 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:50.833 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:51.091 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:51.660 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:52.594 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:52.594 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:52.594 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:52.594 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:52.852 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.852 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:52.852 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.852 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.110 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.110 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.110 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.110 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.368 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.368 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.368 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:53.368 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:53.626 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.626 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:53.626 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.626 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.883 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.883 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:53.883 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.883 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:54.140 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.140 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:54.140 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:54.398 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:54.657 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:55.682 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:55.682 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:55.682 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.682 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:55.940 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:55.940 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:55.940 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.940 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:56.199 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.199 13:59:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:56.199 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.199 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:56.457 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.457 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:56.457 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.457 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:56.716 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.716 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:56.716 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.716 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.974 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.974 
13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:56.974 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.974 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:57.232 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.232 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:57.232 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:57.490 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:58.057 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:58.991 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:58.991 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:58.991 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.991 13:59:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:59.249 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.249 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:59.249 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.249 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:59.507 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.507 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:59.507 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.507 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:59.764 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.764 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:59.764 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.764 13:59:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:00.022 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.022 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:00.023 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.023 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:00.280 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.280 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:00.280 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.280 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:00.538 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.538 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:00.538 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:00.796 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:01.056 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.435 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:02.693 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.693 13:59:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:02.693 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.693 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:02.950 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.950 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:02.950 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.950 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:03.207 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.207 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:03.207 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.207 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:03.465 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.465 
13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:03.465 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.465 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2322783 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2322783 ']' 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2322783 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322783 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2322783' 00:28:03.724 killing process with pid 2322783 00:28:03.724 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2322783 00:28:03.724 
13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2322783 00:28:03.985 { 00:28:03.985 "results": [ 00:28:03.985 { 00:28:03.985 "job": "Nvme0n1", 00:28:03.985 "core_mask": "0x4", 00:28:03.985 "workload": "verify", 00:28:03.985 "status": "terminated", 00:28:03.985 "verify_range": { 00:28:03.985 "start": 0, 00:28:03.985 "length": 16384 00:28:03.985 }, 00:28:03.985 "queue_depth": 128, 00:28:03.985 "io_size": 4096, 00:28:03.985 "runtime": 34.29322, 00:28:03.985 "iops": 7987.818000176128, 00:28:03.985 "mibps": 31.202414063188, 00:28:03.985 "io_failed": 0, 00:28:03.985 "io_timeout": 0, 00:28:03.985 "avg_latency_us": 15997.634345310527, 00:28:03.985 "min_latency_us": 1104.4029629629629, 00:28:03.985 "max_latency_us": 4026531.84 00:28:03.985 } 00:28:03.985 ], 00:28:03.985 "core_count": 1 00:28:03.985 } 00:28:03.985 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2322783 00:28:03.985 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:03.985 [2024-12-05 13:58:59.228950] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:28:03.985 [2024-12-05 13:58:59.229035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322783 ] 00:28:03.985 [2024-12-05 13:58:59.296533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.985 [2024-12-05 13:58:59.353642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.985 Running I/O for 90 seconds... 
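For readers following the trace above: each `port_status` call in `multipath_status.sh` dumps `bdev_nvme_get_io_paths` over the bdevperf RPC socket and filters it with a jq query of the form `.poll_groups[].io_paths[] | select (.transport.trsvcid=="<port>").<field>`. The following is a minimal Python sketch of that same selection, run against an illustrative payload (the field names come from the jq queries in this log; the sample values are made up, not captured from this run):

```python
import json

# Illustrative payload shaped like `rpc.py bdev_nvme_get_io_paths` output.
# Values are hypothetical, chosen to match the check_status false/true case above.
SAMPLE = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": false, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": true, "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(paths, trsvcid, field):
    """Python equivalent of the jq query:
    .poll_groups[].io_paths[] | select (.transport.trsvcid=="<port>").<field>
    Returns the first matching path's field value, or None if no path matches."""
    for group in paths["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

print(port_status(SAMPLE, "4420", "current"))     # False
print(port_status(SAMPLE, "4421", "current"))     # True
print(port_status(SAMPLE, "4421", "accessible"))  # True
```

The shell test then compares the jq output against the expected literal (`[[ true == \t\r\u\e ]]`); the sketch's return value plays the same role.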
00:28:03.985 8249.00 IOPS, 32.22 MiB/s [2024-12-05T12:59:35.511Z] 8390.50 IOPS, 32.78 MiB/s [2024-12-05T12:59:35.511Z] 8383.33 IOPS, 32.75 MiB/s [2024-12-05T12:59:35.511Z] 8455.50 IOPS, 33.03 MiB/s [2024-12-05T12:59:35.511Z] 8470.20 IOPS, 33.09 MiB/s [2024-12-05T12:59:35.511Z] 8477.67 IOPS, 33.12 MiB/s [2024-12-05T12:59:35.511Z] 8484.43 IOPS, 33.14 MiB/s [2024-12-05T12:59:35.511Z] 8490.62 IOPS, 33.17 MiB/s [2024-12-05T12:59:35.511Z] 8521.56 IOPS, 33.29 MiB/s [2024-12-05T12:59:35.511Z] 8501.10 IOPS, 33.21 MiB/s [2024-12-05T12:59:35.511Z] 8492.73 IOPS, 33.17 MiB/s [2024-12-05T12:59:35.511Z] 8486.83 IOPS, 33.15 MiB/s [2024-12-05T12:59:35.511Z] 8492.46 IOPS, 33.17 MiB/s [2024-12-05T12:59:35.511Z] 8481.14 IOPS, 33.13 MiB/s [2024-12-05T12:59:35.511Z] [2024-12-05 13:59:15.895778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.895831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.895906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.895927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.895951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.895968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.895990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:28:03.985 [2024-12-05 13:59:15.896703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.985 [2024-12-05 13:59:15.896813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.985 [2024-12-05 13:59:15.896836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.896853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.896890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.896907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.896928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 
[2024-12-05 13:59:15.896944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.896966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.896981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 
13:59:15.897183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.986 [2024-12-05 13:59:15.897356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 
13:59:15.897411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 
13:59:15.897640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.986 [2024-12-05 13:59:15.897840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.986 [2024-12-05 13:59:15.897856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.897878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.897894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.897917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.897934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.897957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.897973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.897997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.898962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.898986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.899002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.899026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.899042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.899067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.899082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.899106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.899122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.899146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.899162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.899187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.899202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:28:03.986 [2024-12-05 13:59:15.899226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.986 [2024-12-05 13:59:15.899242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.899871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.899988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.900008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.900056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.900109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.900152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.900195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.900239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.900282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.900963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.900979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.987 [2024-12-05 13:59:15.901365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.987 [2024-12-05 13:59:15.901805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:28:03.987 [2024-12-05 13:59:15.901832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:15.901848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:15.901874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:15.901891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:15.901917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:15.901933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:15.901959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:15.901975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:15.902001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:15.902017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:15.902043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:15.902059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:15.902086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:15.902103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:28:03.988 8467.13 IOPS, 33.07 MiB/s [2024-12-05T12:59:35.514Z] 7937.94 IOPS, 31.01 MiB/s [2024-12-05T12:59:35.514Z] 7471.00 IOPS, 29.18 MiB/s [2024-12-05T12:59:35.514Z] 7055.94 IOPS, 27.56 MiB/s [2024-12-05T12:59:35.514Z] 6692.11 IOPS, 26.14 MiB/s [2024-12-05T12:59:35.514Z] 6783.10 IOPS, 26.50 MiB/s [2024-12-05T12:59:35.514Z] 6865.38 IOPS, 26.82 MiB/s [2024-12-05T12:59:35.514Z] 6981.41 IOPS, 27.27 MiB/s [2024-12-05T12:59:35.514Z] 7163.74 IOPS, 27.98 MiB/s [2024-12-05T12:59:35.514Z] 7334.50 IOPS, 28.65 MiB/s [2024-12-05T12:59:35.514Z] 7478.72 IOPS, 29.21 MiB/s [2024-12-05T12:59:35.514Z] 7518.58 IOPS, 29.37 MiB/s [2024-12-05T12:59:35.514Z] 7559.85 IOPS, 29.53 MiB/s [2024-12-05T12:59:35.514Z] 7598.29 IOPS, 29.68 MiB/s [2024-12-05T12:59:35.514Z] 7681.21 IOPS, 30.00 MiB/s [2024-12-05T12:59:35.514Z] 7794.43 IOPS, 30.45 MiB/s [2024-12-05T12:59:35.514Z] 7902.65 IOPS, 30.87 MiB/s [2024-12-05T12:59:35.514Z] [2024-12-05 13:59:32.499310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:32.499378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.499453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:32.499487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.499522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.988 [2024-12-05 13:59:32.499540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.503982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.503999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:03.988 [2024-12-05 13:59:32.504439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.988 [2024-12-05 13:59:32.504457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.988 [2024-12-05 13:59:32.504485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.988 [2024-12-05 13:59:32.504502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.988 [2024-12-05 13:59:32.504524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.988 [2024-12-05 13:59:32.504540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.988 [2024-12-05 13:59:32.504562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.988 [2024-12-05 13:59:32.504579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.988 7960.91 IOPS, 31.10 MiB/s [2024-12-05T12:59:35.514Z] 7976.67 IOPS, 31.16 MiB/s [2024-12-05T12:59:35.514Z] 7986.76 IOPS, 31.20 MiB/s [2024-12-05T12:59:35.514Z] Received shutdown signal, test time was about 34.294107 seconds 00:28:03.988 00:28:03.988 Latency(us) 00:28:03.988 [2024-12-05T12:59:35.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.988 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:03.988 Verification LBA range: start 0x0 length 0x4000 00:28:03.988 Nvme0n1 : 34.29 7987.82 31.20 0.00 0.00 15997.63 1104.40 4026531.84 00:28:03.988 [2024-12-05T12:59:35.514Z] =================================================================================================================== 00:28:03.988 
[2024-12-05T12:59:35.514Z] Total : 7987.82 31.20 0.00 0.00 15997.63 1104.40 4026531.84 00:28:03.988 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.245 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.245 rmmod nvme_tcp 00:28:04.505 rmmod nvme_fabrics 00:28:04.505 rmmod nvme_keyring 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2322596 ']' 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@518 -- # killprocess 2322596 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2322596 ']' 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2322596 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322596 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2322596' 00:28:04.505 killing process with pid 2322596 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2322596 00:28:04.505 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2322596 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.763 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.660 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:06.660 00:28:06.660 real 0m43.412s 00:28:06.660 user 2m12.299s 00:28:06.660 sys 0m10.564s 00:28:06.660 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.660 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:06.660 ************************************ 00:28:06.660 END TEST nvmf_host_multipath_status 00:28:06.660 ************************************ 00:28:06.660 13:59:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:06.660 13:59:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:06.660 13:59:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.660 13:59:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.918 ************************************ 00:28:06.918 START TEST nvmf_discovery_remove_ifc 00:28:06.918 
************************************ 00:28:06.918 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:06.918 * Looking for test storage... 00:28:06.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.918 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.919 13:59:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 
00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:06.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.919 --rc genhtml_branch_coverage=1 00:28:06.919 --rc genhtml_function_coverage=1 00:28:06.919 --rc genhtml_legend=1 00:28:06.919 --rc geninfo_all_blocks=1 00:28:06.919 --rc geninfo_unexecuted_blocks=1 00:28:06.919 00:28:06.919 ' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:06.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.919 --rc genhtml_branch_coverage=1 00:28:06.919 --rc genhtml_function_coverage=1 00:28:06.919 --rc genhtml_legend=1 00:28:06.919 --rc geninfo_all_blocks=1 00:28:06.919 --rc geninfo_unexecuted_blocks=1 00:28:06.919 00:28:06.919 ' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:06.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.919 --rc genhtml_branch_coverage=1 00:28:06.919 --rc genhtml_function_coverage=1 00:28:06.919 --rc genhtml_legend=1 00:28:06.919 --rc geninfo_all_blocks=1 00:28:06.919 --rc geninfo_unexecuted_blocks=1 00:28:06.919 00:28:06.919 ' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:06.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.919 --rc genhtml_branch_coverage=1 00:28:06.919 --rc genhtml_function_coverage=1 00:28:06.919 --rc genhtml_legend=1 00:28:06.919 --rc geninfo_all_blocks=1 00:28:06.919 --rc geninfo_unexecuted_blocks=1 00:28:06.919 00:28:06.919 ' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.919 13:59:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.919 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:06.920 
13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.920 13:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:08.824 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:08.824 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:08.824 Found net devices under 0000:09:00.0: cvl_0_0 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.824 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.824 13:59:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:08.825 Found net devices under 0000:09:00.1: cvl_0_1 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:08.825 13:59:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.825 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.083 13:59:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:28:09.083 00:28:09.083 --- 10.0.0.2 ping statistics --- 00:28:09.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.083 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:09.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:28:09.083 00:28:09.083 --- 10.0.0.1 ping statistics --- 00:28:09.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.083 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2329257 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2329257 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2329257 ']' 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.083 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.083 [2024-12-05 13:59:40.538650] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:28:09.083 [2024-12-05 13:59:40.538748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.342 [2024-12-05 13:59:40.609168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.342 [2024-12-05 13:59:40.660890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.342 [2024-12-05 13:59:40.660946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:09.342 [2024-12-05 13:59:40.660974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.342 [2024-12-05 13:59:40.660986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.342 [2024-12-05 13:59:40.660995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.342 [2024-12-05 13:59:40.661600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.342 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.342 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:09.342 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:09.343 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:09.343 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.343 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.343 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:09.343 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.343 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.343 [2024-12-05 13:59:40.808314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.343 [2024-12-05 13:59:40.816554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:09.343 null0 00:28:09.343 [2024-12-05 13:59:40.848466] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2329286 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2329286 /tmp/host.sock 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2329286 ']' 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:09.601 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.601 13:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.601 [2024-12-05 13:59:40.913887] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:28:09.601 [2024-12-05 13:59:40.913966] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329286 ] 00:28:09.601 [2024-12-05 13:59:40.979074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.601 [2024-12-05 13:59:41.034839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.861 13:59:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.861 13:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.236 [2024-12-05 13:59:42.356601] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:11.236 [2024-12-05 13:59:42.356637] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:11.236 [2024-12-05 13:59:42.356672] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:11.236 [2024-12-05 13:59:42.442970] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:11.236 [2024-12-05 13:59:42.504698] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:11.236 [2024-12-05 13:59:42.505747] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a52a20:1 started. 
00:28:11.236 [2024-12-05 13:59:42.507444] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:11.236 [2024-12-05 13:59:42.507509] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:11.236 [2024-12-05 13:59:42.507555] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:11.236 [2024-12-05 13:59:42.507580] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:11.236 [2024-12-05 13:59:42.507608] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != 
\n\v\m\e\0\n\1 ]] 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:11.236 [2024-12-05 13:59:42.555152] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a52a20 was disconnected and freed. delete nvme_qpair. 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:11.236 13:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:12.176 13:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:13.557 13:59:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:13.557 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:14.496 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:15.435 13:59:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:15.435 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:16.376 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:28:16.637 [2024-12-05 13:59:47.949131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:16.637 [2024-12-05 13:59:47.949205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.637 [2024-12-05 13:59:47.949227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.637 [2024-12-05 13:59:47.949245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.637 [2024-12-05 13:59:47.949258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.637 [2024-12-05 13:59:47.949272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.637 [2024-12-05 13:59:47.949284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.637 [2024-12-05 13:59:47.949298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.637 [2024-12-05 13:59:47.949310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.637 [2024-12-05 13:59:47.949324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.637 [2024-12-05 13:59:47.949337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.637 [2024-12-05 13:59:47.949349] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2f250 is same with the state(6) to be set 00:28:16.637 [2024-12-05 13:59:47.959149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2f250 (9): Bad file descriptor 00:28:16.637 [2024-12-05 13:59:47.969208] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:16.637 [2024-12-05 13:59:47.969229] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:16.637 [2024-12-05 13:59:47.969243] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:16.637 [2024-12-05 13:59:47.969252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:16.637 [2024-12-05 13:59:47.969312] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:17.575 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.575 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.575 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.575 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.575 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.575 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:17.575 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.575 [2024-12-05 13:59:49.014450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:17.575 [2024-12-05 13:59:49.014515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2f250 with addr=10.0.0.2, port=4420 00:28:17.575 [2024-12-05 13:59:49.014536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2f250 is same with the state(6) to be set 00:28:17.575 [2024-12-05 13:59:49.014565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2f250 (9): Bad file descriptor 00:28:17.575 [2024-12-05 13:59:49.014941] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:28:17.575 [2024-12-05 13:59:49.014978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:17.575 [2024-12-05 13:59:49.014994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:17.575 [2024-12-05 13:59:49.015009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:17.575 [2024-12-05 13:59:49.015022] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:17.575 [2024-12-05 13:59:49.015033] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:17.575 [2024-12-05 13:59:49.015041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:17.575 [2024-12-05 13:59:49.015053] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:17.575 [2024-12-05 13:59:49.015061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:17.575 13:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.575 13:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:17.576 13:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:18.511 [2024-12-05 13:59:50.017548] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:18.511 [2024-12-05 13:59:50.017580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:18.511 [2024-12-05 13:59:50.017601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:18.511 [2024-12-05 13:59:50.017631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:18.511 [2024-12-05 13:59:50.017644] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:18.511 [2024-12-05 13:59:50.017658] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:18.511 [2024-12-05 13:59:50.017669] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:18.511 [2024-12-05 13:59:50.017677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:18.511 [2024-12-05 13:59:50.017749] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:18.511 [2024-12-05 13:59:50.017788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.511 [2024-12-05 13:59:50.017825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.511 [2024-12-05 13:59:50.017844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.511 [2024-12-05 13:59:50.017858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.511 [2024-12-05 13:59:50.017872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:18.511 [2024-12-05 13:59:50.017886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.511 [2024-12-05 13:59:50.017900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.511 [2024-12-05 13:59:50.017913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.511 [2024-12-05 13:59:50.017927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.511 [2024-12-05 13:59:50.017940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.511 [2024-12-05 13:59:50.017953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:28:18.511 [2024-12-05 13:59:50.018006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1e9a0 (9): Bad file descriptor 00:28:18.511 [2024-12-05 13:59:50.018993] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:18.511 [2024-12-05 13:59:50.019015] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:18.511 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:18.511 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.511 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:18.511 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:18.511 
13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.511 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.511 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:18.769 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:19.704 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:20.638 [2024-12-05 13:59:52.072110] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:20.638 [2024-12-05 13:59:52.072136] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:20.638 [2024-12-05 13:59:52.072158] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:20.896 [2024-12-05 13:59:52.199570] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:20.896 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:20.896 [2024-12-05 13:59:52.382598] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:20.896 [2024-12-05 13:59:52.383339] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1a31730:1 started. 
00:28:20.896 [2024-12-05 13:59:52.384676] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:20.896 [2024-12-05 13:59:52.384719] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:20.896 [2024-12-05 13:59:52.384776] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:20.896 [2024-12-05 13:59:52.384800] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:20.896 [2024-12-05 13:59:52.384813] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:20.896 [2024-12-05 13:59:52.390603] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1a31730 was disconnected and freed. delete nvme_qpair. 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:21.830 13:59:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2329286 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2329286 ']' 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2329286 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2329286 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2329286' 00:28:21.830 killing process with pid 2329286 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2329286 00:28:21.830 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2329286 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.088 
13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.088 rmmod nvme_tcp 00:28:22.088 rmmod nvme_fabrics 00:28:22.088 rmmod nvme_keyring 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2329257 ']' 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2329257 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2329257 ']' 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2329257 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.088 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2329257 00:28:22.348 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2329257' 00:28:22.349 
killing process with pid 2329257 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2329257 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2329257 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.349 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.885 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.885 00:28:24.885 real 0m17.695s 00:28:24.885 user 0m25.779s 00:28:24.885 sys 0m2.983s 00:28:24.885 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.885 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.885 ************************************ 00:28:24.885 END TEST nvmf_discovery_remove_ifc 00:28:24.885 ************************************ 00:28:24.886 13:59:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:24.886 13:59:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:24.886 13:59:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.886 13:59:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.886 ************************************ 00:28:24.886 START TEST nvmf_identify_kernel_target 00:28:24.886 ************************************ 00:28:24.886 13:59:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:24.886 * Looking for test storage... 
00:28:24.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:24.886 13:59:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.886 13:59:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:24.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.886 --rc genhtml_branch_coverage=1 00:28:24.886 --rc genhtml_function_coverage=1 00:28:24.886 --rc genhtml_legend=1 00:28:24.886 --rc geninfo_all_blocks=1 00:28:24.886 --rc geninfo_unexecuted_blocks=1 00:28:24.886 00:28:24.886 ' 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:24.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.886 --rc genhtml_branch_coverage=1 00:28:24.886 --rc genhtml_function_coverage=1 00:28:24.886 --rc genhtml_legend=1 00:28:24.886 --rc geninfo_all_blocks=1 00:28:24.886 --rc geninfo_unexecuted_blocks=1 00:28:24.886 00:28:24.886 ' 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:24.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.886 --rc genhtml_branch_coverage=1 00:28:24.886 --rc genhtml_function_coverage=1 00:28:24.886 --rc genhtml_legend=1 00:28:24.886 --rc geninfo_all_blocks=1 00:28:24.886 --rc geninfo_unexecuted_blocks=1 00:28:24.886 00:28:24.886 ' 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:24.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.886 --rc genhtml_branch_coverage=1 00:28:24.886 --rc genhtml_function_coverage=1 00:28:24.886 --rc genhtml_legend=1 00:28:24.886 --rc geninfo_all_blocks=1 00:28:24.886 --rc geninfo_unexecuted_blocks=1 00:28:24.886 00:28:24.886 ' 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:24.886 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:24.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.887 13:59:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.418 13:59:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:27.418 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.418 13:59:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:27.418 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.418 13:59:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:27.418 Found net devices under 0000:09:00.0: cvl_0_0 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:27.418 Found net devices under 0000:09:00.1: cvl_0_1 
00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:27.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:28:27.418 00:28:27.418 --- 10.0.0.2 ping statistics --- 00:28:27.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.418 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:27.418 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:28:27.418 00:28:27.418 --- 10.0.0.1 ping statistics --- 00:28:27.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.419 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:27.419 
13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:27.419 13:59:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:28.354 Waiting for block devices as requested 00:28:28.354 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:28.354 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:28.613 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:28.613 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:28.613 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:28.613 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:28.877 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:28.878 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:28.878 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:29.136 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:29.136 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:29.136 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:29.136 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:29.394 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:29.394 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:28:29.394 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:29.394 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:29.670 No valid GPT data, bailing 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:29.670 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:28:29.930 00:28:29.930 Discovery Log Number of Records 2, Generation counter 2 00:28:29.930 =====Discovery Log Entry 0====== 00:28:29.930 trtype: tcp 00:28:29.930 adrfam: ipv4 00:28:29.930 subtype: current discovery subsystem 
00:28:29.930 treq: not specified, sq flow control disable supported 00:28:29.930 portid: 1 00:28:29.930 trsvcid: 4420 00:28:29.930 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:29.930 traddr: 10.0.0.1 00:28:29.930 eflags: none 00:28:29.930 sectype: none 00:28:29.930 =====Discovery Log Entry 1====== 00:28:29.930 trtype: tcp 00:28:29.930 adrfam: ipv4 00:28:29.930 subtype: nvme subsystem 00:28:29.930 treq: not specified, sq flow control disable supported 00:28:29.930 portid: 1 00:28:29.930 trsvcid: 4420 00:28:29.930 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:29.930 traddr: 10.0.0.1 00:28:29.930 eflags: none 00:28:29.930 sectype: none 00:28:29.930 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:29.930 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:29.930 ===================================================== 00:28:29.930 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:29.930 ===================================================== 00:28:29.930 Controller Capabilities/Features 00:28:29.930 ================================ 00:28:29.930 Vendor ID: 0000 00:28:29.930 Subsystem Vendor ID: 0000 00:28:29.930 Serial Number: 0fea13ebc00db2f6e6a8 00:28:29.930 Model Number: Linux 00:28:29.930 Firmware Version: 6.8.9-20 00:28:29.930 Recommended Arb Burst: 0 00:28:29.930 IEEE OUI Identifier: 00 00 00 00:28:29.930 Multi-path I/O 00:28:29.930 May have multiple subsystem ports: No 00:28:29.930 May have multiple controllers: No 00:28:29.930 Associated with SR-IOV VF: No 00:28:29.930 Max Data Transfer Size: Unlimited 00:28:29.930 Max Number of Namespaces: 0 00:28:29.930 Max Number of I/O Queues: 1024 00:28:29.930 NVMe Specification Version (VS): 1.3 00:28:29.931 NVMe Specification Version (Identify): 1.3 00:28:29.931 Maximum Queue Entries: 1024 
00:28:29.931 Contiguous Queues Required: No 00:28:29.931 Arbitration Mechanisms Supported 00:28:29.931 Weighted Round Robin: Not Supported 00:28:29.931 Vendor Specific: Not Supported 00:28:29.931 Reset Timeout: 7500 ms 00:28:29.931 Doorbell Stride: 4 bytes 00:28:29.931 NVM Subsystem Reset: Not Supported 00:28:29.931 Command Sets Supported 00:28:29.931 NVM Command Set: Supported 00:28:29.931 Boot Partition: Not Supported 00:28:29.931 Memory Page Size Minimum: 4096 bytes 00:28:29.931 Memory Page Size Maximum: 4096 bytes 00:28:29.931 Persistent Memory Region: Not Supported 00:28:29.931 Optional Asynchronous Events Supported 00:28:29.931 Namespace Attribute Notices: Not Supported 00:28:29.931 Firmware Activation Notices: Not Supported 00:28:29.931 ANA Change Notices: Not Supported 00:28:29.931 PLE Aggregate Log Change Notices: Not Supported 00:28:29.931 LBA Status Info Alert Notices: Not Supported 00:28:29.931 EGE Aggregate Log Change Notices: Not Supported 00:28:29.931 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.931 Zone Descriptor Change Notices: Not Supported 00:28:29.931 Discovery Log Change Notices: Supported 00:28:29.931 Controller Attributes 00:28:29.931 128-bit Host Identifier: Not Supported 00:28:29.931 Non-Operational Permissive Mode: Not Supported 00:28:29.931 NVM Sets: Not Supported 00:28:29.931 Read Recovery Levels: Not Supported 00:28:29.931 Endurance Groups: Not Supported 00:28:29.931 Predictable Latency Mode: Not Supported 00:28:29.931 Traffic Based Keep ALive: Not Supported 00:28:29.931 Namespace Granularity: Not Supported 00:28:29.931 SQ Associations: Not Supported 00:28:29.931 UUID List: Not Supported 00:28:29.931 Multi-Domain Subsystem: Not Supported 00:28:29.931 Fixed Capacity Management: Not Supported 00:28:29.931 Variable Capacity Management: Not Supported 00:28:29.931 Delete Endurance Group: Not Supported 00:28:29.931 Delete NVM Set: Not Supported 00:28:29.931 Extended LBA Formats Supported: Not Supported 00:28:29.931 Flexible 
Data Placement Supported: Not Supported 00:28:29.931 00:28:29.931 Controller Memory Buffer Support 00:28:29.931 ================================ 00:28:29.931 Supported: No 00:28:29.931 00:28:29.931 Persistent Memory Region Support 00:28:29.931 ================================ 00:28:29.931 Supported: No 00:28:29.931 00:28:29.931 Admin Command Set Attributes 00:28:29.931 ============================ 00:28:29.931 Security Send/Receive: Not Supported 00:28:29.931 Format NVM: Not Supported 00:28:29.931 Firmware Activate/Download: Not Supported 00:28:29.931 Namespace Management: Not Supported 00:28:29.931 Device Self-Test: Not Supported 00:28:29.931 Directives: Not Supported 00:28:29.931 NVMe-MI: Not Supported 00:28:29.931 Virtualization Management: Not Supported 00:28:29.931 Doorbell Buffer Config: Not Supported 00:28:29.931 Get LBA Status Capability: Not Supported 00:28:29.931 Command & Feature Lockdown Capability: Not Supported 00:28:29.931 Abort Command Limit: 1 00:28:29.931 Async Event Request Limit: 1 00:28:29.931 Number of Firmware Slots: N/A 00:28:29.931 Firmware Slot 1 Read-Only: N/A 00:28:29.931 Firmware Activation Without Reset: N/A 00:28:29.931 Multiple Update Detection Support: N/A 00:28:29.931 Firmware Update Granularity: No Information Provided 00:28:29.931 Per-Namespace SMART Log: No 00:28:29.931 Asymmetric Namespace Access Log Page: Not Supported 00:28:29.931 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:29.931 Command Effects Log Page: Not Supported 00:28:29.931 Get Log Page Extended Data: Supported 00:28:29.931 Telemetry Log Pages: Not Supported 00:28:29.931 Persistent Event Log Pages: Not Supported 00:28:29.931 Supported Log Pages Log Page: May Support 00:28:29.931 Commands Supported & Effects Log Page: Not Supported 00:28:29.931 Feature Identifiers & Effects Log Page:May Support 00:28:29.931 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.931 Data Area 4 for Telemetry Log: Not Supported 00:28:29.931 Error Log Page Entries 
Supported: 1 00:28:29.931 Keep Alive: Not Supported 00:28:29.931 00:28:29.931 NVM Command Set Attributes 00:28:29.931 ========================== 00:28:29.931 Submission Queue Entry Size 00:28:29.931 Max: 1 00:28:29.931 Min: 1 00:28:29.931 Completion Queue Entry Size 00:28:29.931 Max: 1 00:28:29.931 Min: 1 00:28:29.931 Number of Namespaces: 0 00:28:29.931 Compare Command: Not Supported 00:28:29.931 Write Uncorrectable Command: Not Supported 00:28:29.931 Dataset Management Command: Not Supported 00:28:29.931 Write Zeroes Command: Not Supported 00:28:29.931 Set Features Save Field: Not Supported 00:28:29.931 Reservations: Not Supported 00:28:29.931 Timestamp: Not Supported 00:28:29.931 Copy: Not Supported 00:28:29.931 Volatile Write Cache: Not Present 00:28:29.931 Atomic Write Unit (Normal): 1 00:28:29.931 Atomic Write Unit (PFail): 1 00:28:29.931 Atomic Compare & Write Unit: 1 00:28:29.931 Fused Compare & Write: Not Supported 00:28:29.931 Scatter-Gather List 00:28:29.931 SGL Command Set: Supported 00:28:29.931 SGL Keyed: Not Supported 00:28:29.931 SGL Bit Bucket Descriptor: Not Supported 00:28:29.931 SGL Metadata Pointer: Not Supported 00:28:29.931 Oversized SGL: Not Supported 00:28:29.931 SGL Metadata Address: Not Supported 00:28:29.931 SGL Offset: Supported 00:28:29.931 Transport SGL Data Block: Not Supported 00:28:29.931 Replay Protected Memory Block: Not Supported 00:28:29.931 00:28:29.931 Firmware Slot Information 00:28:29.931 ========================= 00:28:29.931 Active slot: 0 00:28:29.931 00:28:29.931 00:28:29.931 Error Log 00:28:29.931 ========= 00:28:29.931 00:28:29.931 Active Namespaces 00:28:29.931 ================= 00:28:29.931 Discovery Log Page 00:28:29.931 ================== 00:28:29.931 Generation Counter: 2 00:28:29.931 Number of Records: 2 00:28:29.931 Record Format: 0 00:28:29.931 00:28:29.931 Discovery Log Entry 0 00:28:29.931 ---------------------- 00:28:29.931 Transport Type: 3 (TCP) 00:28:29.931 Address Family: 1 (IPv4) 00:28:29.931 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:28:29.931 Entry Flags: 00:28:29.931 Duplicate Returned Information: 0 00:28:29.931 Explicit Persistent Connection Support for Discovery: 0 00:28:29.931 Transport Requirements: 00:28:29.931 Secure Channel: Not Specified 00:28:29.931 Port ID: 1 (0x0001) 00:28:29.931 Controller ID: 65535 (0xffff) 00:28:29.931 Admin Max SQ Size: 32 00:28:29.931 Transport Service Identifier: 4420 00:28:29.931 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:29.931 Transport Address: 10.0.0.1 00:28:29.931 Discovery Log Entry 1 00:28:29.931 ---------------------- 00:28:29.931 Transport Type: 3 (TCP) 00:28:29.931 Address Family: 1 (IPv4) 00:28:29.931 Subsystem Type: 2 (NVM Subsystem) 00:28:29.931 Entry Flags: 00:28:29.931 Duplicate Returned Information: 0 00:28:29.931 Explicit Persistent Connection Support for Discovery: 0 00:28:29.931 Transport Requirements: 00:28:29.931 Secure Channel: Not Specified 00:28:29.931 Port ID: 1 (0x0001) 00:28:29.931 Controller ID: 65535 (0xffff) 00:28:29.931 Admin Max SQ Size: 32 00:28:29.931 Transport Service Identifier: 4420 00:28:29.931 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:29.931 Transport Address: 10.0.0.1 00:28:29.931 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.931 get_feature(0x01) failed 00:28:29.931 get_feature(0x02) failed 00:28:29.931 get_feature(0x04) failed 00:28:29.931 ===================================================== 00:28:29.931 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:29.931 ===================================================== 00:28:29.931 Controller Capabilities/Features 00:28:29.931 ================================ 00:28:29.931 Vendor ID: 0000 00:28:29.931 Subsystem Vendor ID: 
0000 00:28:29.931 Serial Number: a0ad026be1649018d680 00:28:29.931 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:29.931 Firmware Version: 6.8.9-20 00:28:29.931 Recommended Arb Burst: 6 00:28:29.931 IEEE OUI Identifier: 00 00 00 00:28:29.931 Multi-path I/O 00:28:29.931 May have multiple subsystem ports: Yes 00:28:29.931 May have multiple controllers: Yes 00:28:29.931 Associated with SR-IOV VF: No 00:28:29.931 Max Data Transfer Size: Unlimited 00:28:29.931 Max Number of Namespaces: 1024 00:28:29.931 Max Number of I/O Queues: 128 00:28:29.931 NVMe Specification Version (VS): 1.3 00:28:29.931 NVMe Specification Version (Identify): 1.3 00:28:29.931 Maximum Queue Entries: 1024 00:28:29.931 Contiguous Queues Required: No 00:28:29.931 Arbitration Mechanisms Supported 00:28:29.931 Weighted Round Robin: Not Supported 00:28:29.931 Vendor Specific: Not Supported 00:28:29.931 Reset Timeout: 7500 ms 00:28:29.931 Doorbell Stride: 4 bytes 00:28:29.931 NVM Subsystem Reset: Not Supported 00:28:29.931 Command Sets Supported 00:28:29.931 NVM Command Set: Supported 00:28:29.931 Boot Partition: Not Supported 00:28:29.931 Memory Page Size Minimum: 4096 bytes 00:28:29.931 Memory Page Size Maximum: 4096 bytes 00:28:29.931 Persistent Memory Region: Not Supported 00:28:29.931 Optional Asynchronous Events Supported 00:28:29.931 Namespace Attribute Notices: Supported 00:28:29.931 Firmware Activation Notices: Not Supported 00:28:29.931 ANA Change Notices: Supported 00:28:29.931 PLE Aggregate Log Change Notices: Not Supported 00:28:29.931 LBA Status Info Alert Notices: Not Supported 00:28:29.931 EGE Aggregate Log Change Notices: Not Supported 00:28:29.931 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.931 Zone Descriptor Change Notices: Not Supported 00:28:29.931 Discovery Log Change Notices: Not Supported 00:28:29.931 Controller Attributes 00:28:29.931 128-bit Host Identifier: Supported 00:28:29.931 Non-Operational Permissive Mode: Not Supported 00:28:29.931 NVM Sets: Not 
Supported 00:28:29.931 Read Recovery Levels: Not Supported 00:28:29.931 Endurance Groups: Not Supported 00:28:29.931 Predictable Latency Mode: Not Supported 00:28:29.931 Traffic Based Keep ALive: Supported 00:28:29.931 Namespace Granularity: Not Supported 00:28:29.932 SQ Associations: Not Supported 00:28:29.932 UUID List: Not Supported 00:28:29.932 Multi-Domain Subsystem: Not Supported 00:28:29.932 Fixed Capacity Management: Not Supported 00:28:29.932 Variable Capacity Management: Not Supported 00:28:29.932 Delete Endurance Group: Not Supported 00:28:29.932 Delete NVM Set: Not Supported 00:28:29.932 Extended LBA Formats Supported: Not Supported 00:28:29.932 Flexible Data Placement Supported: Not Supported 00:28:29.932 00:28:29.932 Controller Memory Buffer Support 00:28:29.932 ================================ 00:28:29.932 Supported: No 00:28:29.932 00:28:29.932 Persistent Memory Region Support 00:28:29.932 ================================ 00:28:29.932 Supported: No 00:28:29.932 00:28:29.932 Admin Command Set Attributes 00:28:29.932 ============================ 00:28:29.932 Security Send/Receive: Not Supported 00:28:29.932 Format NVM: Not Supported 00:28:29.932 Firmware Activate/Download: Not Supported 00:28:29.932 Namespace Management: Not Supported 00:28:29.932 Device Self-Test: Not Supported 00:28:29.932 Directives: Not Supported 00:28:29.932 NVMe-MI: Not Supported 00:28:29.932 Virtualization Management: Not Supported 00:28:29.932 Doorbell Buffer Config: Not Supported 00:28:29.932 Get LBA Status Capability: Not Supported 00:28:29.932 Command & Feature Lockdown Capability: Not Supported 00:28:29.932 Abort Command Limit: 4 00:28:29.932 Async Event Request Limit: 4 00:28:29.932 Number of Firmware Slots: N/A 00:28:29.932 Firmware Slot 1 Read-Only: N/A 00:28:29.932 Firmware Activation Without Reset: N/A 00:28:29.932 Multiple Update Detection Support: N/A 00:28:29.932 Firmware Update Granularity: No Information Provided 00:28:29.932 Per-Namespace SMART Log: Yes 
00:28:29.932 Asymmetric Namespace Access Log Page: Supported 00:28:29.932 ANA Transition Time : 10 sec 00:28:29.932 00:28:29.932 Asymmetric Namespace Access Capabilities 00:28:29.932 ANA Optimized State : Supported 00:28:29.932 ANA Non-Optimized State : Supported 00:28:29.932 ANA Inaccessible State : Supported 00:28:29.932 ANA Persistent Loss State : Supported 00:28:29.932 ANA Change State : Supported 00:28:29.932 ANAGRPID is not changed : No 00:28:29.932 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:29.932 00:28:29.932 ANA Group Identifier Maximum : 128 00:28:29.932 Number of ANA Group Identifiers : 128 00:28:29.932 Max Number of Allowed Namespaces : 1024 00:28:29.932 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:29.932 Command Effects Log Page: Supported 00:28:29.932 Get Log Page Extended Data: Supported 00:28:29.932 Telemetry Log Pages: Not Supported 00:28:29.932 Persistent Event Log Pages: Not Supported 00:28:29.932 Supported Log Pages Log Page: May Support 00:28:29.932 Commands Supported & Effects Log Page: Not Supported 00:28:29.932 Feature Identifiers & Effects Log Page:May Support 00:28:29.932 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.932 Data Area 4 for Telemetry Log: Not Supported 00:28:29.932 Error Log Page Entries Supported: 128 00:28:29.932 Keep Alive: Supported 00:28:29.932 Keep Alive Granularity: 1000 ms 00:28:29.932 00:28:29.932 NVM Command Set Attributes 00:28:29.932 ========================== 00:28:29.932 Submission Queue Entry Size 00:28:29.932 Max: 64 00:28:29.932 Min: 64 00:28:29.932 Completion Queue Entry Size 00:28:29.932 Max: 16 00:28:29.932 Min: 16 00:28:29.932 Number of Namespaces: 1024 00:28:29.932 Compare Command: Not Supported 00:28:29.932 Write Uncorrectable Command: Not Supported 00:28:29.932 Dataset Management Command: Supported 00:28:29.932 Write Zeroes Command: Supported 00:28:29.932 Set Features Save Field: Not Supported 00:28:29.932 Reservations: Not Supported 00:28:29.932 Timestamp: Not Supported 
00:28:29.932 Copy: Not Supported 00:28:29.932 Volatile Write Cache: Present 00:28:29.932 Atomic Write Unit (Normal): 1 00:28:29.932 Atomic Write Unit (PFail): 1 00:28:29.932 Atomic Compare & Write Unit: 1 00:28:29.932 Fused Compare & Write: Not Supported 00:28:29.932 Scatter-Gather List 00:28:29.932 SGL Command Set: Supported 00:28:29.932 SGL Keyed: Not Supported 00:28:29.932 SGL Bit Bucket Descriptor: Not Supported 00:28:29.932 SGL Metadata Pointer: Not Supported 00:28:29.932 Oversized SGL: Not Supported 00:28:29.932 SGL Metadata Address: Not Supported 00:28:29.932 SGL Offset: Supported 00:28:29.932 Transport SGL Data Block: Not Supported 00:28:29.932 Replay Protected Memory Block: Not Supported 00:28:29.932 00:28:29.932 Firmware Slot Information 00:28:29.932 ========================= 00:28:29.932 Active slot: 0 00:28:29.932 00:28:29.932 Asymmetric Namespace Access 00:28:29.932 =========================== 00:28:29.932 Change Count : 0 00:28:29.932 Number of ANA Group Descriptors : 1 00:28:29.932 ANA Group Descriptor : 0 00:28:29.932 ANA Group ID : 1 00:28:29.932 Number of NSID Values : 1 00:28:29.932 Change Count : 0 00:28:29.932 ANA State : 1 00:28:29.932 Namespace Identifier : 1 00:28:29.932 00:28:29.932 Commands Supported and Effects 00:28:29.932 ============================== 00:28:29.932 Admin Commands 00:28:29.932 -------------- 00:28:29.932 Get Log Page (02h): Supported 00:28:29.932 Identify (06h): Supported 00:28:29.932 Abort (08h): Supported 00:28:29.932 Set Features (09h): Supported 00:28:29.932 Get Features (0Ah): Supported 00:28:29.932 Asynchronous Event Request (0Ch): Supported 00:28:29.932 Keep Alive (18h): Supported 00:28:29.932 I/O Commands 00:28:29.932 ------------ 00:28:29.932 Flush (00h): Supported 00:28:29.932 Write (01h): Supported LBA-Change 00:28:29.932 Read (02h): Supported 00:28:29.932 Write Zeroes (08h): Supported LBA-Change 00:28:29.932 Dataset Management (09h): Supported 00:28:29.932 00:28:29.932 Error Log 00:28:29.932 ========= 
00:28:29.932 Entry: 0 00:28:29.932 Error Count: 0x3 00:28:29.932 Submission Queue Id: 0x0 00:28:29.932 Command Id: 0x5 00:28:29.932 Phase Bit: 0 00:28:29.932 Status Code: 0x2 00:28:29.932 Status Code Type: 0x0 00:28:29.932 Do Not Retry: 1 00:28:29.932 Error Location: 0x28 00:28:29.932 LBA: 0x0 00:28:29.932 Namespace: 0x0 00:28:29.932 Vendor Log Page: 0x0 00:28:29.932 ----------- 00:28:29.932 Entry: 1 00:28:29.932 Error Count: 0x2 00:28:29.932 Submission Queue Id: 0x0 00:28:29.932 Command Id: 0x5 00:28:29.932 Phase Bit: 0 00:28:29.932 Status Code: 0x2 00:28:29.932 Status Code Type: 0x0 00:28:29.932 Do Not Retry: 1 00:28:29.932 Error Location: 0x28 00:28:29.932 LBA: 0x0 00:28:29.932 Namespace: 0x0 00:28:29.932 Vendor Log Page: 0x0 00:28:29.932 ----------- 00:28:29.932 Entry: 2 00:28:29.932 Error Count: 0x1 00:28:29.932 Submission Queue Id: 0x0 00:28:29.932 Command Id: 0x4 00:28:29.932 Phase Bit: 0 00:28:29.932 Status Code: 0x2 00:28:29.932 Status Code Type: 0x0 00:28:29.932 Do Not Retry: 1 00:28:29.932 Error Location: 0x28 00:28:29.932 LBA: 0x0 00:28:29.932 Namespace: 0x0 00:28:29.932 Vendor Log Page: 0x0 00:28:29.932 00:28:29.932 Number of Queues 00:28:29.932 ================ 00:28:29.932 Number of I/O Submission Queues: 128 00:28:29.932 Number of I/O Completion Queues: 128 00:28:29.932 00:28:29.932 ZNS Specific Controller Data 00:28:29.932 ============================ 00:28:29.932 Zone Append Size Limit: 0 00:28:29.932 00:28:29.932 00:28:29.932 Active Namespaces 00:28:29.932 ================= 00:28:29.932 get_feature(0x05) failed 00:28:29.932 Namespace ID:1 00:28:29.932 Command Set Identifier: NVM (00h) 00:28:29.932 Deallocate: Supported 00:28:29.932 Deallocated/Unwritten Error: Not Supported 00:28:29.932 Deallocated Read Value: Unknown 00:28:29.932 Deallocate in Write Zeroes: Not Supported 00:28:29.932 Deallocated Guard Field: 0xFFFF 00:28:29.932 Flush: Supported 00:28:29.932 Reservation: Not Supported 00:28:29.932 Namespace Sharing Capabilities: Multiple 
Controllers 00:28:29.932 Size (in LBAs): 1953525168 (931GiB) 00:28:29.932 Capacity (in LBAs): 1953525168 (931GiB) 00:28:29.932 Utilization (in LBAs): 1953525168 (931GiB) 00:28:29.932 UUID: 4177a944-255b-4936-92b5-0972ecfed0a7 00:28:29.932 Thin Provisioning: Not Supported 00:28:29.932 Per-NS Atomic Units: Yes 00:28:29.932 Atomic Boundary Size (Normal): 0 00:28:29.932 Atomic Boundary Size (PFail): 0 00:28:29.932 Atomic Boundary Offset: 0 00:28:29.932 NGUID/EUI64 Never Reused: No 00:28:29.932 ANA group ID: 1 00:28:29.932 Namespace Write Protected: No 00:28:29.932 Number of LBA Formats: 1 00:28:29.932 Current LBA Format: LBA Format #00 00:28:29.932 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:29.932 00:28:29.932 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:29.932 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.932 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:29.932 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.932 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:29.932 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.932 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.932 rmmod nvme_tcp 00:28:29.932 rmmod nvme_fabrics 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.191 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:32.098 14:00:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:32.098 14:00:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:33.476 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:33.476 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:33.476 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:33.476 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:33.476 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:33.476 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:33.476 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:33.476 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:33.476 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:28:34.411 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:34.669 00:28:34.669 real 0m10.022s 00:28:34.669 user 0m2.207s 00:28:34.669 sys 0m3.769s 00:28:34.669 14:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.669 14:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.669 ************************************ 00:28:34.669 END TEST nvmf_identify_kernel_target 00:28:34.669 ************************************ 00:28:34.669 14:00:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:34.669 14:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:34.669 14:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.669 14:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.669 ************************************ 00:28:34.670 START TEST nvmf_auth_host 00:28:34.670 ************************************ 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:34.670 * Looking for test storage... 
00:28:34.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.670 --rc genhtml_branch_coverage=1 00:28:34.670 --rc genhtml_function_coverage=1 00:28:34.670 --rc genhtml_legend=1 00:28:34.670 --rc geninfo_all_blocks=1 00:28:34.670 --rc geninfo_unexecuted_blocks=1 00:28:34.670 00:28:34.670 ' 00:28:34.670 14:00:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.670 --rc genhtml_branch_coverage=1 00:28:34.670 --rc genhtml_function_coverage=1 00:28:34.670 --rc genhtml_legend=1 00:28:34.670 --rc geninfo_all_blocks=1 00:28:34.670 --rc geninfo_unexecuted_blocks=1 00:28:34.670 00:28:34.670 ' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.670 --rc genhtml_branch_coverage=1 00:28:34.670 --rc genhtml_function_coverage=1 00:28:34.670 --rc genhtml_legend=1 00:28:34.670 --rc geninfo_all_blocks=1 00:28:34.670 --rc geninfo_unexecuted_blocks=1 00:28:34.670 00:28:34.670 ' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.670 --rc genhtml_branch_coverage=1 00:28:34.670 --rc genhtml_function_coverage=1 00:28:34.670 --rc genhtml_legend=1 00:28:34.670 --rc geninfo_all_blocks=1 00:28:34.670 --rc geninfo_unexecuted_blocks=1 00:28:34.670 00:28:34.670 ' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.670 14:00:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:34.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:34.670 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.671 14:00:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.671 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:37.206 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:37.206 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
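The loop traced above walks the detected PCI NICs and, a few steps later, resolves each PCI address to its kernel net device with the glob `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`. A minimal sketch of that sysfs lookup; the fake sysfs tree below is only for demonstration (a real run would pass the default `/sys` root):

```python
import os
import tempfile

def net_devs_for_pci(pci_addr, sysfs_root="/sys"):
    """Return kernel net-device names for a PCI address, mirroring the
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the trace."""
    net_dir = os.path.join(sysfs_root, "bus/pci/devices", pci_addr, "net")
    if not os.path.isdir(net_dir):
        return []
    return sorted(os.listdir(net_dir))

# Demonstrate against a throwaway sysfs-like tree instead of the real /sys.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "bus/pci/devices/0000:09:00.0/net/cvl_0_0"))
print(net_devs_for_pci("0000:09:00.0", sysfs_root=root))  # ['cvl_0_0']
```

This matches the `Found net devices under 0000:09:00.0: cvl_0_0` lines in the log: one net device per port of the E810 NIC.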
00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:37.206 Found net devices under 0000:09:00.0: cvl_0_0 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:37.206 Found net devices under 0000:09:00.1: cvl_0_1 00:28:37.206 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:37.206 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:37.207 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:37.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:28:37.207 00:28:37.207 --- 10.0.0.2 ping statistics --- 00:28:37.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.207 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:28:37.207 00:28:37.207 --- 10.0.0.1 ping statistics --- 00:28:37.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.207 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2336942 00:28:37.207 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2336942 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2336942 ']' 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=926831201ee949c7fa64ccf781d65a87 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.L8g 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 926831201ee949c7fa64ccf781d65a87 0 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 926831201ee949c7fa64ccf781d65a87 0 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=926831201ee949c7fa64ccf781d65a87 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.L8g 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.L8g 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.L8g 00:28:37.207 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=337e3d717639d6bde061400c1e0862bc3c672daa3f6eaf81381d10c9956a4d64 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wt8 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 337e3d717639d6bde061400c1e0862bc3c672daa3f6eaf81381d10c9956a4d64 3 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 337e3d717639d6bde061400c1e0862bc3c672daa3f6eaf81381d10c9956a4d64 3 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=337e3d717639d6bde061400c1e0862bc3c672daa3f6eaf81381d10c9956a4d64 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
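The `format_dhchap_key ... | python -` steps above wrap each raw secret in the DHHC-1 transport representation. The exact inline script is not shown in the trace; the sketch below assumes the DH-HMAC-CHAP secret convention used by nvme-cli (base64 of the secret bytes followed by a little-endian CRC-32 trailer, digest id 0=none, 1=sha256, 2=sha384, 3=sha512):

```python
import base64
import zlib

def format_dhchap_key(secret: bytes, digest_id: int) -> str:
    """Format a DH-HMAC-CHAP secret as a DHHC-1 string. Assumes the
    nvme-cli layout: DHHC-1:<dd>:<base64(secret || crc32_le(secret))>:"""
    crc = zlib.crc32(secret)
    blob = secret + crc.to_bytes(4, "little")
    return f"DHHC-1:{digest_id:02x}:{base64.b64encode(blob).decode()}:"

# The trace's null/32 key uses the 32-char hex string itself as the secret.
key = format_dhchap_key(b"926831201ee949c7fa64ccf781d65a87", 0)
print(key)  # DHHC-1:00:OTI2...:
```

If the formatting succeeds, the trace then `chmod 0600`s the key file, since these secrets are credentials.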
00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wt8 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wt8 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wt8 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:37.207 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:37.208 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:37.208 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee3875ac3a90cfe36927d8478d592757a0fa9a98e55b8a01 00:28:37.208 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kIk 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee3875ac3a90cfe36927d8478d592757a0fa9a98e55b8a01 0 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee3875ac3a90cfe36927d8478d592757a0fa9a98e55b8a01 0 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.467 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee3875ac3a90cfe36927d8478d592757a0fa9a98e55b8a01 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kIk 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kIk 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kIk 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=49672d7511735b5e1716c1be82ef09a9da940c382006e213 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rcz 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 49672d7511735b5e1716c1be82ef09a9da940c382006e213 2 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 49672d7511735b5e1716c1be82ef09a9da940c382006e213 2 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=49672d7511735b5e1716c1be82ef09a9da940c382006e213 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rcz 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rcz 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.rcz 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=55aff3c5cfeaf5927d094c96b20aef4c 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.d9M 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 55aff3c5cfeaf5927d094c96b20aef4c 1 00:28:37.467 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 55aff3c5cfeaf5927d094c96b20aef4c 1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=55aff3c5cfeaf5927d094c96b20aef4c 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.d9M 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.d9M 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.d9M 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=5dd1ece23d1e5d8e592411379df6dc37 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Co0 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5dd1ece23d1e5d8e592411379df6dc37 1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5dd1ece23d1e5d8e592411379df6dc37 1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5dd1ece23d1e5d8e592411379df6dc37 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Co0 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Co0 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Co0 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:37.468 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5a00551594352615b468ded54d94cf9e2d031a7b19b3112b 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nlz 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5a00551594352615b468ded54d94cf9e2d031a7b19b3112b 2 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5a00551594352615b468ded54d94cf9e2d031a7b19b3112b 2 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5a00551594352615b468ded54d94cf9e2d031a7b19b3112b 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nlz 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nlz 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.nlz 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dfb83ec822299762ac035e5c569dcdd6 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VQp 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dfb83ec822299762ac035e5c569dcdd6 0 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dfb83ec822299762ac035e5c569dcdd6 0 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dfb83ec822299762ac035e5c569dcdd6 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:37.468 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VQp 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VQp 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.VQp 00:28:37.741 14:00:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=94f25cfbb1972ede9c116646cf5bfc1ed484241e92228232c2c5565b1e755ac4 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yB5 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 94f25cfbb1972ede9c116646cf5bfc1ed484241e92228232c2c5565b1e755ac4 3 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 94f25cfbb1972ede9c116646cf5bfc1ed484241e92228232c2c5565b1e755ac4 3 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=94f25cfbb1972ede9c116646cf5bfc1ed484241e92228232c2c5565b1e755ac4 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
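Each `gen_dhchap_key <digest> <len>` call in the trace follows the same recipe: map the digest name to an id, draw `len/2` random bytes via `xxd -p -c0 -l $((len/2)) /dev/urandom` (yielding a `len`-character hex secret), write it to a `mktemp` file, and `chmod 0600` it. A direct sketch of that recipe (the `spdk.key-sha512.` prefix mirrors the trace's temp-file naming):

```python
import os
import tempfile

# Digest-name -> id map, as declared in the trace's gen_dhchap_key.
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_dhchap_secret(digest: str, length: int) -> str:
    """Draw length/2 random bytes and hex-encode them, producing a
    `length`-character secret, like `xxd -p -c0 -l $((len/2)) /dev/urandom`."""
    if digest not in DIGESTS or length % 2:
        raise ValueError("unsupported digest or odd length")
    return os.urandom(length // 2).hex()

secret = gen_dhchap_secret("sha512", 64)
fd, path = tempfile.mkstemp(prefix="spdk.key-sha512.")
os.write(fd, secret.encode())
os.close(fd)
os.chmod(path, 0o600)  # keys are chmod 0600 in the trace as well
```

This explains the lengths seen above: `null 32` and `sha256 32` read 16 random bytes, `sha384 48` reads 24, and `sha512 64` reads 32.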
00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yB5 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yB5 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yB5 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2336942 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2336942 ']' 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
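The `gen_dhchap_key` trace above reduces to a few shell steps: draw random bytes from `/dev/urandom`, hex-encode them, and write the hex string into a DHHC-1-prefixed, mode-0600 temp file. The sketch below mirrors those commands but deliberately skips the `python -` step, which in SPDK base64-encodes the key plus a CRC suffix; the output here is therefore illustrative, not a valid DH-HMAC-CHAP secret, and the file name and key value differ per run.

```shell
# Sketch of gen_dhchap_key for the null digest (digest id 0), following the
# xtrace above. NOTE: a real SPDK secret base64-encodes the key material via
# the python helper; this simplified version writes raw hex instead.
key=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> 32 hex chars
file=$(mktemp -t spdk.key-null.XXX)      # e.g. /tmp/spdk.key-null.VQp
echo "DHHC-1:00:${key}:" > "$file"       # DHHC-1 prefix, digest 0 == null
chmod 0600 "$file"                       # secrets must not be world-readable
echo "$file"                             # callers capture the path
```

For the sha512 variant in the same trace the flow is identical, only with `len=64` (so `xxd -l 32`, 64 hex chars) and digest id 3.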
00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.741 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.L8g 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wt8 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wt8 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kIk 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.rcz ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rcz 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.d9M 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Co0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Co0 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.nlz 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.VQp ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.VQp 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yB5 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.001 14:00:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.001 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:38.002 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:38.937 Waiting for block devices as requested 00:28:39.194 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:39.194 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:39.476 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:39.476 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:39.476 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:39.476 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:39.736 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:39.736 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:39.736 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:39.994 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:39.994 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:39.994 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:39.994 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:39.994 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:40.251 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:40.251 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:40.251 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:40.510 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:40.510 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:40.510 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:40.510 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:40.510 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:28:40.510 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:40.510 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:40.768 No valid GPT data, bailing 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:28:40.768 00:28:40.768 Discovery Log Number of Records 2, Generation counter 2 00:28:40.768 =====Discovery Log Entry 0====== 00:28:40.768 trtype: tcp 00:28:40.768 adrfam: ipv4 00:28:40.768 subtype: current discovery subsystem 00:28:40.768 treq: not specified, sq flow control disable supported 00:28:40.768 portid: 1 00:28:40.768 trsvcid: 4420 00:28:40.768 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:40.768 traddr: 10.0.0.1 00:28:40.768 eflags: none 00:28:40.768 sectype: none 00:28:40.768 =====Discovery Log Entry 1====== 00:28:40.768 trtype: tcp 00:28:40.768 adrfam: ipv4 00:28:40.768 subtype: nvme subsystem 00:28:40.768 treq: not specified, sq flow control disable supported 00:28:40.768 portid: 1 00:28:40.768 trsvcid: 4420 00:28:40.768 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:40.768 traddr: 10.0.0.1 00:28:40.768 eflags: none 00:28:40.768 sectype: none 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:40.768 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.769 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.028 nvme0n1 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.028 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.287 nvme0n1 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.287 14:00:12 
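The host-side flow traced above has three stages: register the key files with the keyring, restrict the initiator to the digest/dhgroup pair under test, and attach with `--dhchap-key`/`--dhchap-ctrlr-key` for bidirectional authentication. A hedged sketch using `rpc.py`, assuming an SPDK target already listening on `/var/tmp/spdk.sock` and key files produced by `gen_dhchap_key` (the `rpc` path is an assumption; NQNs, address, and key names are taken from the log):

```shell
# Condensed host-side DH-HMAC-CHAP sequence from the xtrace above.
rpc=scripts/rpc.py   # path inside an SPDK checkout (assumption)

# 1. Register per-keyid secrets (the loop above adds key0..key4, ckey0..ckey3).
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.kIk
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rcz

# 2. Limit the initiator to the digest/dhgroup combination being exercised.
$rpc bdev_nvme_set_options \
    --dhchap-digests sha256 \
    --dhchap-dhgroups ffdhe2048

# 3. Connect; --dhchap-ctrlr-key enables bidirectional authentication.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
```

The test then verifies the attach via `bdev_nvme_get_controllers` (expecting `nvme0`) and tears it down with `bdev_nvme_detach_controller`, exactly as the surrounding trace shows for each digest/dhgroup/keyid combination.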
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:41.287 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.288 
14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.288 nvme0n1 00:28:41.288 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.546 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.547 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:28:41.547 nvme0n1 00:28:41.547 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.547 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.547 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.547 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.547 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.547 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.805 nvme0n1 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.805 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.062 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.062 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.062 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:42.063 14:00:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.063 nvme0n1 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.063 
14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:42.063 
14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.063 14:00:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.063 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.322 nvme0n1 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.322 14:00:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.322 14:00:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.322 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.581 nvme0n1 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.581 14:00:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.581 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:42.582 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.582 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.840 nvme0n1
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:28:42.840 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]]
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.841 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.100 nvme0n1
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.100 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.101 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.101 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.101 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.101 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.101 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:43.101 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.101 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.359 nvme0n1
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8:
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=:
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8:
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]]
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=:
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.359 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.360 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.924 nvme0n1
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]]
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.924 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.925 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.181 nvme0n1
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.181 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.437 nvme0n1
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.437 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.694 nvme0n1
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.694 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.951 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.208 nvme0n1
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.208 14:00:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.208 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.774 nvme0n1 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.774 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.775 14:00:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.775 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.341 nvme0n1 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:46.341 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.342 14:00:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.342 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.908 nvme0n1 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.908 14:00:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.908 14:00:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.908 14:00:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.908 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.474 nvme0n1 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.474 14:00:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.474 14:00:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.474 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.040 nvme0n1 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.040 14:00:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.040 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.986 nvme0n1 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.986 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.986 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.987 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.987 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.987 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.921 nvme0n1 00:28:49.921 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.921 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.921 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.922 14:00:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.922 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.856 nvme0n1 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.856 14:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.856 14:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.856 14:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.856 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.857 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.857 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.857 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.857 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.857 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:50.857 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.857 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.792 nvme0n1 00:28:51.792 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.792 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.792 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.792 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.792 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.792 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.792 14:00:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.792 14:00:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.792 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.728 nvme0n1 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.728 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.729 14:00:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.729 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.729 nvme0n1
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.729 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.987 nvme0n1
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:28:52.987 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]]
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:52.988 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.246 nvme0n1
00:28:53.246 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.246 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:53.246 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:53.246 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.246 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.247 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.506 nvme0n1
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.506 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.507 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.507 nvme0n1
00:28:53.507 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.507 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:53.507 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.507 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.507 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:53.507 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8:
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=:
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8:
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=:
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.767 nvme0n1
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:53.767 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.026 nvme0n1
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:54.026 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]]
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:54.284 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.285 nvme0n1
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:54.285 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.543 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.543 nvme0n1 00:28:54.543 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.543 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.543 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.543 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.543 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.543 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.802 nvme0n1 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.802 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.061 14:00:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.061 14:00:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:55.061 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.061 14:00:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.320 nvme0n1 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.320 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.321 
14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.321 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.579 nvme0n1 00:28:55.579 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.579 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.579 14:00:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.579 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.579 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.579 14:00:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.579 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.837 nvme0n1 00:28:55.837 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.837 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.837 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.837 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.838 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.838 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:56.096 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.097 14:00:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.097 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.355 nvme0n1 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.355 14:00:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.355 14:00:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.355 
14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.355 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.613 nvme0n1 00:28:56.613 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.613 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.614 14:00:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.614 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.179 nvme0n1 
00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.179 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:57.180 14:00:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.180 
14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.180 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.746 nvme0n1 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.746 14:00:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:57.746 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.747 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.356 nvme0n1 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.356 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.357 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.925 nvme0n1 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.925 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:59.490 nvme0n1 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:59.490 14:00:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.490 14:00:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.491 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.421 nvme0n1 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:00.421 14:00:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.421 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.353 nvme0n1 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.353 
14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.353 14:00:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.353 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.284 nvme0n1 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.284 14:00:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.284 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.285 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.285 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.285 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.285 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.285 14:00:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.285 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:02.285 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.285 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.222 nvme0n1 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:03.222 14:00:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.222 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.787 nvme0n1 00:29:03.787 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.787 
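Each `connect_authenticate` pass in the trace above repeats the same four-step RPC sequence: restrict the host to one DH-HMAC-CHAP digest/dhgroup pair with `bdev_nvme_set_options`, attach the controller with the host key (and, when a controller key exists, the `ckey`), verify via `bdev_nvme_get_controllers`, then detach. A minimal offline sketch of that sequence; the `rpc_cmd` stub here is a hypothetical stand-in that only echoes the command (the real `rpc_cmd` talks to a running SPDK target):

```shell
# Hypothetical stand-in for SPDK's rpc_cmd wrapper, so the sequence can be
# shown without a live target: it just prints the rpc.py invocation.
rpc_cmd() { echo "rpc.py $*"; }

digest=sha384 dhgroup=ffdhe8192 keyid=1

# 1. Allow exactly one digest/dhgroup pair for this iteration.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. Attach with the host key (and controller key, when one is configured).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 3. Confirm the controller authenticated and came up.
rpc_cmd bdev_nvme_get_controllers
# 4. Tear down before the next digest/dhgroup/key combination.
rpc_cmd bdev_nvme_detach_controller nvme0
```

Key 4 in the trace has no controller key (`ckey=''`), which is why its attach line carries only `--dhchap-key key4`; the `${ckeys[keyid]:+...}` expansion at `auth.sh@58` drops the `--dhchap-ctrlr-key` argument entirely in that case.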
14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.787 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.787 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.787 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.787 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.044 nvme0n1 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.044 14:00:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.044 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.302 nvme0n1 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:29:04.302 14:00:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.302 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.560 nvme0n1 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.560 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.560 14:00:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.560 14:00:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.560 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.817 nvme0n1 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.817 14:00:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.817 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.075 nvme0n1 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.075 14:00:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.075 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.333 nvme0n1 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.333 14:00:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.333 
14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]]
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.333 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.591 nvme0n1
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.591 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.849 nvme0n1
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.849 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.106 nvme0n1
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=:
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:06.106 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.107 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.364 nvme0n1
00:29:06.364 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.364 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:06.364 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.364 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:06.364 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.364 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.364 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8:
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=:
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8:
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]]
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=:
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.365 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.623 nvme0n1
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==:
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]]
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==:
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.623 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.624 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.882 nvme0n1
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V:
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7:
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.882 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:07.139 nvme0n1
00:29:07.139 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.139 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:07.139 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:07.139 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.139 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:07.397 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==:
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1:
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:07.398 14:00:38
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.398 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.656 nvme0n1 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.656 
14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.656 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.657 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.916 nvme0n1 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:07.916 14:00:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.916 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.917 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.483 nvme0n1 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.483 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:29:08.484 14:00:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.484 14:00:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.050 nvme0n1 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.050 
14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.050 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.051 14:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.617 nvme0n1 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.617 14:00:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.617 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:10.184 nvme0n1 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.184 
14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.184 14:00:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.754 nvme0n1 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTI2ODMxMjAxZWU5NDljN2ZhNjRjY2Y3ODFkNjVhODdL55g8: 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: ]] 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzM3ZTNkNzE3NjM5ZDZiZGUwNjE0MDBjMWUwODYyYmMzYzY3MmRhYTNmNmVhZjgxMzgxZDEwYzk5NTZhNGQ2NLIiyvE=: 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.754 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.755 14:00:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.755 14:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.693 nvme0n1 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.693 14:00:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:29:11.693 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.694 14:00:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.694 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.633 nvme0n1 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.633 14:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.633 14:00:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.633 14:00:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.633 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.571 nvme0n1 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.571 14:00:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWEwMDU1MTU5NDM1MjYxNWI0NjhkZWQ1NGQ5NGNmOWUyZDAzMWE3YjE5YjMxMTJirWWRPg==: 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: ]] 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGZiODNlYzgyMjI5OTc2MmFjMDM1ZTVjNTY5ZGNkZDZXY/N1: 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.571 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.572 14:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
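(Each `rpc_cmd bdev_nvme_attach_controller` call above is a JSON-RPC request to the SPDK target; the negative-path failure later in this log dumps the request body verbatim. A sketch of how such a request could be assembled, using the field names from that dump — the underscore-style `dhchap_key`/`dhchap_ctrlr_key` parameter names are an assumption mapped from the CLI flags, not taken from the dump:)

```python
import json

def attach_controller_request(req_id, keyid=None):
    """Build a bdev_nvme_attach_controller JSON-RPC request like the one
    the log's rpc_cmd wrapper sends (field names from the request dump)."""
    params = {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.1",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "hostnqn": "nqn.2024-02.io.spdk:host0",
    }
    if keyid is not None:
        # Assumed parameter names; they reference secrets registered earlier.
        params["dhchap_key"] = f"key{keyid}"
        params["dhchap_ctrlr_key"] = f"ckey{keyid}"
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "bdev_nvme_attach_controller",
        "id": req_id,
        "params": params,
    })
```

Without a `dhchap_key`, the target configured for required authentication rejects the connect, which is exactly the negative case the `NOT rpc_cmd ...` invocation below verifies.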
00:29:14.508 nvme0n1 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTRmMjVjZmJiMTk3MmVkZTljMTE2NjQ2Y2Y1YmZjMWVkNDg0MjQxZTkyMjI4MjMyYzJjNTU2NWIxZTc1NWFjNC0qTfA=: 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.508 
14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.508 14:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.450 nvme0n1 00:29:15.450 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:29:15.451 
14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.451 request: 00:29:15.451 { 00:29:15.451 "name": "nvme0", 00:29:15.451 "trtype": "tcp", 00:29:15.451 "traddr": "10.0.0.1", 00:29:15.451 "adrfam": "ipv4", 00:29:15.451 "trsvcid": "4420", 00:29:15.451 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:15.451 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:15.451 "prchk_reftag": false, 00:29:15.451 "prchk_guard": false, 00:29:15.451 "hdgst": false, 00:29:15.451 "ddgst": false, 00:29:15.451 "allow_unrecognized_csi": false, 00:29:15.451 "method": "bdev_nvme_attach_controller", 00:29:15.451 "req_id": 1 00:29:15.451 } 00:29:15.451 Got JSON-RPC error response 00:29:15.451 response: 00:29:15.451 { 00:29:15.451 "code": -5, 00:29:15.451 "message": "Input/output 
error" 00:29:15.451 } 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.451 request: 00:29:15.451 { 00:29:15.451 "name": "nvme0", 00:29:15.451 "trtype": "tcp", 00:29:15.451 "traddr": "10.0.0.1", 
00:29:15.451 "adrfam": "ipv4", 00:29:15.451 "trsvcid": "4420", 00:29:15.451 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:15.451 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:15.451 "prchk_reftag": false, 00:29:15.451 "prchk_guard": false, 00:29:15.451 "hdgst": false, 00:29:15.451 "ddgst": false, 00:29:15.451 "dhchap_key": "key2", 00:29:15.451 "allow_unrecognized_csi": false, 00:29:15.451 "method": "bdev_nvme_attach_controller", 00:29:15.451 "req_id": 1 00:29:15.451 } 00:29:15.451 Got JSON-RPC error response 00:29:15.451 response: 00:29:15.451 { 00:29:15.451 "code": -5, 00:29:15.451 "message": "Input/output error" 00:29:15.451 } 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.451 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.452 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.711 14:00:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.711 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:15.711 14:00:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.712 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.712 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.712 14:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.712 request: 00:29:15.712 { 00:29:15.712 "name": "nvme0", 00:29:15.712 "trtype": "tcp", 00:29:15.712 "traddr": "10.0.0.1", 00:29:15.712 "adrfam": "ipv4", 00:29:15.712 "trsvcid": "4420", 00:29:15.712 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:15.712 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:15.712 "prchk_reftag": false, 00:29:15.712 "prchk_guard": false, 00:29:15.712 "hdgst": false, 00:29:15.712 "ddgst": false, 00:29:15.712 "dhchap_key": "key1", 00:29:15.712 "dhchap_ctrlr_key": "ckey2", 00:29:15.712 "allow_unrecognized_csi": false, 00:29:15.712 "method": "bdev_nvme_attach_controller", 00:29:15.712 "req_id": 1 00:29:15.712 } 00:29:15.712 Got JSON-RPC error response 00:29:15.712 response: 00:29:15.712 { 00:29:15.712 "code": -5, 00:29:15.712 "message": "Input/output error" 00:29:15.712 } 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.712 nvme0n1 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.712 14:00:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.712 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.972 14:00:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.972 request: 00:29:15.972 { 00:29:15.972 "name": "nvme0", 00:29:15.972 "dhchap_key": "key1", 00:29:15.972 "dhchap_ctrlr_key": "ckey2", 00:29:15.972 "method": "bdev_nvme_set_keys", 00:29:15.972 "req_id": 1 00:29:15.972 } 00:29:15.972 Got JSON-RPC error response 00:29:15.972 response: 00:29:15.972 { 00:29:15.972 "code": -13, 00:29:15.972 "message": "Permission denied" 00:29:15.972 } 00:29:15.972 
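The request/response dumps interleaved in this log are the JSON-RPC payloads that `rpc_cmd` sends and receives; a negative `code` in the response (`-13` Permission denied here, `-5` Input/output error earlier) is exactly what the test's `NOT` wrapper asserts on for the expected-failure branches. A minimal, hypothetical sketch of classifying such a response envelope (field names copied from the dumps, not from SPDK internals):

```python
import json

def rpc_failed(response_text: str) -> bool:
    """Return True when a JSON-RPC error response carries a negative
    code, mirroring how the test's NOT wrapper expects a nonzero
    exit status from rpc_cmd."""
    resp = json.loads(response_text)
    return "code" in resp and resp["code"] < 0

denied = '{"code": -13, "message": "Permission denied"}'
io_err = '{"code": -5, "message": "Input/output error"}'
ok = '{}'  # a success response carries no error object (assumption)
assert rpc_failed(denied) and rpc_failed(io_err) and not rpc_failed(ok)
```

This matches the control flow visible above: a failed attach or `bdev_nvme_set_keys` produces `es=1`, and the `[[ 1 == 0 ]]` check records the failure as the expected outcome.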
14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:15.972 14:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:17.357 14:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.357 14:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:17.357 14:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.357 14:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.357 14:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.357 14:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:17.357 14:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.295 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzODc1YWMzYTkwY2ZlMzY5MjdkODQ3OGQ1OTI3NTdhMGZhOWE5OGU1NWI4YTAxg3sZZQ==: 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: ]] 00:29:18.296 14:00:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk2NzJkNzUxMTczNWI1ZTE3MTZjMWJlODJlZjA5YTlkYTk0MGMzODIwMDZlMjEz6LpkcQ==: 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.296 nvme0n1 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.296 14:00:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTVhZmYzYzVjZmVhZjU5MjdkMDk0Yzk2YjIwYWVmNGN06C0V: 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: ]] 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWRkMWVjZTIzZDFlNWQ4ZTU5MjQxMTM3OWRmNmRjMzfLJD+7: 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:18.296 
14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.296 request: 00:29:18.296 { 00:29:18.296 "name": "nvme0", 00:29:18.296 "dhchap_key": "key2", 00:29:18.296 "dhchap_ctrlr_key": "ckey1", 00:29:18.296 "method": "bdev_nvme_set_keys", 00:29:18.296 "req_id": 1 00:29:18.296 } 00:29:18.296 Got JSON-RPC error response 00:29:18.296 response: 00:29:18.296 { 00:29:18.296 "code": -13, 00:29:18.296 "message": "Permission denied" 00:29:18.296 } 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.296 14:00:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:18.296 14:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.714 rmmod nvme_tcp 00:29:19.714 rmmod nvme_fabrics 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2336942 ']' 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2336942 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2336942 ']' 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2336942 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336942 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336942' 00:29:19.714 killing process with pid 2336942 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2336942 00:29:19.714 14:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2336942 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.714 14:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:22.249 14:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:23.184 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:23.184 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:23.184 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:23.184 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:23.184 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:23.184 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:23.184 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:23.184 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:23.184 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:24.145 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:29:24.145 14:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.L8g /tmp/spdk.key-null.kIk /tmp/spdk.key-sha256.d9M /tmp/spdk.key-sha384.nlz /tmp/spdk.key-sha512.yB5 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:24.145 14:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:25.521 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:25.521 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:25.521 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:25.521 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:25.521 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:25.521 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:25.521 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:25.521 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:25.521 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:25.521 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:25.521 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:25.521 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:25.521 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:25.521 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:25.521 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:25.521 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:25.521 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:25.521 00:29:25.521 real 0m50.929s 00:29:25.521 user 0m48.433s 00:29:25.521 sys 0m5.967s 00:29:25.521 14:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.521 14:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.521 ************************************ 00:29:25.521 END TEST nvmf_auth_host 00:29:25.521 ************************************ 00:29:25.521 14:00:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:29:25.521 14:00:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:25.521 14:00:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.521 14:00:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.521 14:00:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.521 ************************************ 00:29:25.521 START TEST nvmf_digest 00:29:25.521 ************************************ 00:29:25.521 14:00:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:25.521 * Looking for test storage... 00:29:25.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.782 --rc genhtml_branch_coverage=1 00:29:25.782 --rc genhtml_function_coverage=1 00:29:25.782 --rc genhtml_legend=1 00:29:25.782 --rc geninfo_all_blocks=1 00:29:25.782 --rc geninfo_unexecuted_blocks=1 00:29:25.782 00:29:25.782 ' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.782 --rc genhtml_branch_coverage=1 00:29:25.782 --rc genhtml_function_coverage=1 00:29:25.782 --rc genhtml_legend=1 00:29:25.782 --rc geninfo_all_blocks=1 00:29:25.782 --rc geninfo_unexecuted_blocks=1 00:29:25.782 00:29:25.782 ' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.782 --rc genhtml_branch_coverage=1 00:29:25.782 --rc genhtml_function_coverage=1 00:29:25.782 --rc genhtml_legend=1 00:29:25.782 --rc geninfo_all_blocks=1 00:29:25.782 --rc geninfo_unexecuted_blocks=1 00:29:25.782 00:29:25.782 ' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.782 --rc genhtml_branch_coverage=1 00:29:25.782 --rc genhtml_function_coverage=1 00:29:25.782 --rc genhtml_legend=1 00:29:25.782 --rc geninfo_all_blocks=1 00:29:25.782 --rc geninfo_unexecuted_blocks=1 00:29:25.782 00:29:25.782 ' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:25.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:25.782 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.783 14:00:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.783 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:28.346 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.346 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.346 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.347 14:00:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:28.347 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:28.347 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:28.347 Found net devices under 0000:09:00.0: cvl_0_0 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:28.347 Found net devices under 0000:09:00.1: cvl_0_1 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:28.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:29:28.347 00:29:28.347 --- 10.0.0.2 ping statistics --- 00:29:28.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.347 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:28.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:29:28.347 00:29:28.347 --- 10.0.0.1 ping statistics --- 00:29:28.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.347 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:28.347 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:28.348 ************************************ 00:29:28.348 START TEST nvmf_digest_clean 00:29:28.348 ************************************ 00:29:28.348 
14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2346737 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2346737 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2346737 ']' 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.348 14:00:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.348 [2024-12-05 14:00:59.525944] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:28.348 [2024-12-05 14:00:59.526036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.348 [2024-12-05 14:00:59.598333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.348 [2024-12-05 14:00:59.654074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.348 [2024-12-05 14:00:59.654130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.348 [2024-12-05 14:00:59.654159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.348 [2024-12-05 14:00:59.654169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.348 [2024-12-05 14:00:59.654179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.348 [2024-12-05 14:00:59.654815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.348 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.607 null0 00:29:28.607 [2024-12-05 14:00:59.904424] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.607 [2024-12-05 14:00:59.928648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2346761 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2346761 /var/tmp/bperf.sock 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2346761 ']' 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
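`waitforlisten` itself is defined in common/autotest_common.sh and its body is not shown in this trace; only the `max_retries=100` local and the "Waiting for process..." message are visible. The underlying idea can be sketched as a polling loop (the helper name below is an assumption, not the real function):

```shell
# Sketch only: poll until a path (e.g. the RPC UNIX socket) exists,
# giving up after max_retries attempts, as suggested by max_retries=100 above.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1   # retry budget exhausted
        sleep 0.1
    done
}
```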
00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.607 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.607 [2024-12-05 14:00:59.982583] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:28.607 [2024-12-05 14:00:59.982665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2346761 ] 00:29:28.607 [2024-12-05 14:01:00.066214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.866 [2024-12-05 14:01:00.146329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.866 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.866 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:28.866 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:28.866 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:28.866 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:29.124 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.124 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.690 nvme0n1 00:29:29.690 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:29.690 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.690 Running I/O for 2 seconds... 00:29:32.009 18851.00 IOPS, 73.64 MiB/s [2024-12-05T13:01:03.535Z] 18864.50 IOPS, 73.69 MiB/s 00:29:32.009 Latency(us) 00:29:32.009 [2024-12-05T13:01:03.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.009 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:32.009 nvme0n1 : 2.01 18870.10 73.71 0.00 0.00 6776.82 3349.62 14757.74 00:29:32.009 [2024-12-05T13:01:03.535Z] =================================================================================================================== 00:29:32.009 [2024-12-05T13:01:03.535Z] Total : 18870.10 73.71 0.00 0.00 6776.82 3349.62 14757.74 00:29:32.009 { 00:29:32.009 "results": [ 00:29:32.009 { 00:29:32.009 "job": "nvme0n1", 00:29:32.009 "core_mask": "0x2", 00:29:32.009 "workload": "randread", 00:29:32.009 "status": "finished", 00:29:32.009 "queue_depth": 128, 00:29:32.009 "io_size": 4096, 00:29:32.009 "runtime": 2.010694, 00:29:32.009 "iops": 18870.10156692167, 00:29:32.009 "mibps": 73.71133424578777, 00:29:32.009 "io_failed": 0, 00:29:32.009 "io_timeout": 0, 00:29:32.009 "avg_latency_us": 6776.816114732623, 00:29:32.009 "min_latency_us": 3349.617777777778, 00:29:32.009 "max_latency_us": 14757.736296296296 00:29:32.009 } 00:29:32.009 ], 00:29:32.009 "core_count": 1 00:29:32.009 } 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:32.009 | select(.opcode=="crc32c") 00:29:32.009 | "\(.module_name) \(.executed)"' 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2346761 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2346761 ']' 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2346761 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2346761 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2346761' 00:29:32.009 killing process with pid 2346761 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2346761 00:29:32.009 Received shutdown signal, test time was about 2.000000 seconds 00:29:32.009 00:29:32.009 Latency(us) 00:29:32.009 [2024-12-05T13:01:03.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.009 [2024-12-05T13:01:03.535Z] =================================================================================================================== 00:29:32.009 [2024-12-05T13:01:03.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.009 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2346761 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2347288 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2347288 /var/tmp/bperf.sock 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2347288 ']' 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:32.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.268 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:32.268 [2024-12-05 14:01:03.713296] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:32.268 [2024-12-05 14:01:03.713380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347288 ] 00:29:32.268 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:32.268 Zero copy mechanism will not be used. 
00:29:32.268 [2024-12-05 14:01:03.778274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.525 [2024-12-05 14:01:03.832740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.526 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.526 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:32.526 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:32.526 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:32.526 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:32.783 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.784 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:33.349 nvme0n1 00:29:33.349 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:33.349 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:33.608 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:33.608 Zero copy mechanism will not be used. 00:29:33.608 Running I/O for 2 seconds... 
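The MiB/s column in bdevperf's result tables is just IOPS scaled by the IO size (MiB/s = IOPS × io_size / 2^20). Re-deriving the figure from the 4 KiB randread table above confirms the reported numbers are consistent:

```shell
# Re-derive bdevperf's MiB/s from its IOPS column for the 4 KiB randread run:
# 18870.10 IOPS * 4096 bytes / 1048576 bytes-per-MiB
awk -v iops=18870.10 -v bs=4096 'BEGIN { printf "%.2f MiB/s\n", iops * bs / 1048576 }'
# -> 73.71 MiB/s, matching the reported column
```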
00:29:35.527 6399.00 IOPS, 799.88 MiB/s [2024-12-05T13:01:07.053Z] 6253.50 IOPS, 781.69 MiB/s 00:29:35.527 Latency(us) 00:29:35.527 [2024-12-05T13:01:07.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.527 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:35.527 nvme0n1 : 2.00 6248.87 781.11 0.00 0.00 2556.47 679.63 10874.12 00:29:35.527 [2024-12-05T13:01:07.053Z] =================================================================================================================== 00:29:35.527 [2024-12-05T13:01:07.053Z] Total : 6248.87 781.11 0.00 0.00 2556.47 679.63 10874.12 00:29:35.527 { 00:29:35.527 "results": [ 00:29:35.527 { 00:29:35.527 "job": "nvme0n1", 00:29:35.527 "core_mask": "0x2", 00:29:35.527 "workload": "randread", 00:29:35.527 "status": "finished", 00:29:35.527 "queue_depth": 16, 00:29:35.527 "io_size": 131072, 00:29:35.527 "runtime": 2.004361, 00:29:35.527 "iops": 6248.874329524472, 00:29:35.527 "mibps": 781.109291190559, 00:29:35.527 "io_failed": 0, 00:29:35.527 "io_timeout": 0, 00:29:35.527 "avg_latency_us": 2556.4747208398016, 00:29:35.527 "min_latency_us": 679.6325925925926, 00:29:35.527 "max_latency_us": 10874.121481481481 00:29:35.527 } 00:29:35.527 ], 00:29:35.527 "core_count": 1 00:29:35.527 } 00:29:35.527 14:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:35.528 14:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:35.528 14:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:35.528 14:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:35.528 14:01:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:29:35.528 | select(.opcode=="crc32c") 00:29:35.528 | "\(.module_name) \(.executed)"' 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2347288 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2347288 ']' 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2347288 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2347288 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2347288' 00:29:35.788 killing process with pid 2347288 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2347288 00:29:35.788 Received shutdown signal, test time was about 2.000000 seconds 00:29:35.788 
00:29:35.788 Latency(us) 00:29:35.788 [2024-12-05T13:01:07.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.788 [2024-12-05T13:01:07.314Z] =================================================================================================================== 00:29:35.788 [2024-12-05T13:01:07.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.788 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2347288 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2347702 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2347702 /var/tmp/bperf.sock 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2347702 ']' 00:29:36.046 14:01:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.046 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:36.046 [2024-12-05 14:01:07.510465] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:36.046 [2024-12-05 14:01:07.510552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347702 ] 00:29:36.302 [2024-12-05 14:01:07.576919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.302 [2024-12-05 14:01:07.632184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.302 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.302 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:36.302 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:36.302 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:36.302 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:36.869 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.869 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.127 nvme0n1 00:29:37.127 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:37.127 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:37.386 Running I/O for 2 seconds... 
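After each run, host/digest.sh reads `accel_get_stats` over the bperf RPC socket and uses a jq filter to pull out which accel module executed the crc32c digest operations. The payload below is an invented sample shaped like the RPC reply, just to exercise that filter standalone:

```shell
# Invented accel_get_stats-shaped payload (values are assumptions, not from the run)
stats='{"operations":[
  {"opcode":"copy",   "module_name":"software", "executed": 10},
  {"opcode":"crc32c", "module_name":"software", "executed": 9451}
]}'
# Same filter as host/digest.sh@37: emit "<module> <executed>" for crc32c only
echo "$stats" | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# -> software 9451
```

The script then reads that pair into `acc_module acc_executed` and asserts the expected module (`software`, since dsa scanning is false) actually ran.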
00:29:39.261 20096.00 IOPS, 78.50 MiB/s [2024-12-05T13:01:10.787Z] 19876.00 IOPS, 77.64 MiB/s 00:29:39.261 Latency(us) 00:29:39.261 [2024-12-05T13:01:10.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.261 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.261 nvme0n1 : 2.01 19876.66 77.64 0.00 0.00 6425.44 2742.80 12913.02 00:29:39.261 [2024-12-05T13:01:10.787Z] =================================================================================================================== 00:29:39.261 [2024-12-05T13:01:10.787Z] Total : 19876.66 77.64 0.00 0.00 6425.44 2742.80 12913.02 00:29:39.261 { 00:29:39.261 "results": [ 00:29:39.261 { 00:29:39.261 "job": "nvme0n1", 00:29:39.261 "core_mask": "0x2", 00:29:39.261 "workload": "randwrite", 00:29:39.261 "status": "finished", 00:29:39.261 "queue_depth": 128, 00:29:39.261 "io_size": 4096, 00:29:39.261 "runtime": 2.007983, 00:29:39.261 "iops": 19876.662302419893, 00:29:39.261 "mibps": 77.6432121188277, 00:29:39.261 "io_failed": 0, 00:29:39.261 "io_timeout": 0, 00:29:39.261 "avg_latency_us": 6425.4410820842895, 00:29:39.261 "min_latency_us": 2742.8029629629627, 00:29:39.261 "max_latency_us": 12913.01925925926 00:29:39.261 } 00:29:39.261 ], 00:29:39.261 "core_count": 1 00:29:39.261 } 00:29:39.261 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:39.261 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:39.261 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:39.261 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:39.261 | select(.opcode=="crc32c") 00:29:39.262 | "\(.module_name) \(.executed)"' 00:29:39.262 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2347702 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2347702 ']' 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2347702 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.521 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2347702 00:29:39.521 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:39.521 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:39.521 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2347702' 00:29:39.521 killing process with pid 2347702 00:29:39.521 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2347702 00:29:39.521 Received shutdown signal, test time was about 2.000000 seconds 
00:29:39.521 00:29:39.521 Latency(us) 00:29:39.521 [2024-12-05T13:01:11.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.521 [2024-12-05T13:01:11.047Z] =================================================================================================================== 00:29:39.521 [2024-12-05T13:01:11.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.521 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2347702 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2348110 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2348110 /var/tmp/bperf.sock 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2348110 ']' 00:29:39.780 14:01:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:39.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.780 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:39.780 [2024-12-05 14:01:11.300156] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:39.780 [2024-12-05 14:01:11.300241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348110 ] 00:29:39.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:39.780 Zero copy mechanism will not be used. 
00:29:40.039 [2024-12-05 14:01:11.365557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.039 [2024-12-05 14:01:11.417249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.039 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.039 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:40.039 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:40.039 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:40.039 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:40.605 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.605 14:01:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.863 nvme0n1 00:29:40.864 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:40.864 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:41.122 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:41.122 Zero copy mechanism will not be used. 00:29:41.122 Running I/O for 2 seconds... 
00:29:43.060 5946.00 IOPS, 743.25 MiB/s [2024-12-05T13:01:14.586Z] 6219.50 IOPS, 777.44 MiB/s 00:29:43.060 Latency(us) 00:29:43.060 [2024-12-05T13:01:14.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.060 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:43.060 nvme0n1 : 2.00 6217.81 777.23 0.00 0.00 2566.27 1699.08 10777.03 00:29:43.060 [2024-12-05T13:01:14.586Z] =================================================================================================================== 00:29:43.060 [2024-12-05T13:01:14.586Z] Total : 6217.81 777.23 0.00 0.00 2566.27 1699.08 10777.03 00:29:43.060 { 00:29:43.060 "results": [ 00:29:43.060 { 00:29:43.060 "job": "nvme0n1", 00:29:43.060 "core_mask": "0x2", 00:29:43.060 "workload": "randwrite", 00:29:43.060 "status": "finished", 00:29:43.060 "queue_depth": 16, 00:29:43.060 "io_size": 131072, 00:29:43.060 "runtime": 2.003761, 00:29:43.060 "iops": 6217.807413159553, 00:29:43.060 "mibps": 777.2259266449441, 00:29:43.060 "io_failed": 0, 00:29:43.060 "io_timeout": 0, 00:29:43.060 "avg_latency_us": 2566.2660728374253, 00:29:43.060 "min_latency_us": 1699.0814814814814, 00:29:43.060 "max_latency_us": 10777.031111111111 00:29:43.060 } 00:29:43.060 ], 00:29:43.060 "core_count": 1 00:29:43.060 } 00:29:43.060 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:43.060 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:43.060 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:43.060 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:43.060 | select(.opcode=="crc32c") 00:29:43.060 | "\(.module_name) \(.executed)"' 00:29:43.060 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:43.318 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2348110 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2348110 ']' 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2348110 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348110 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348110' 00:29:43.319 killing process with pid 2348110 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2348110 00:29:43.319 Received shutdown signal, test time was about 2.000000 seconds 
00:29:43.319 00:29:43.319 Latency(us) 00:29:43.319 [2024-12-05T13:01:14.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.319 [2024-12-05T13:01:14.845Z] =================================================================================================================== 00:29:43.319 [2024-12-05T13:01:14.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:43.319 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2348110 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2346737 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2346737 ']' 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2346737 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2346737 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2346737' 00:29:43.579 killing process with pid 2346737 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2346737 00:29:43.579 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2346737 00:29:43.838 00:29:43.838 
real 0m15.812s 00:29:43.838 user 0m31.739s 00:29:43.838 sys 0m4.309s 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:43.838 ************************************ 00:29:43.838 END TEST nvmf_digest_clean 00:29:43.838 ************************************ 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:43.838 ************************************ 00:29:43.838 START TEST nvmf_digest_error 00:29:43.838 ************************************ 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2348673 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:43.838 
14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2348673 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2348673 ']' 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.838 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.097 [2024-12-05 14:01:15.392109] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:44.097 [2024-12-05 14:01:15.392188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.097 [2024-12-05 14:01:15.461962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.097 [2024-12-05 14:01:15.515624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.097 [2024-12-05 14:01:15.515680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:44.097 [2024-12-05 14:01:15.515709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.097 [2024-12-05 14:01:15.515720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.097 [2024-12-05 14:01:15.515729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.097 [2024-12-05 14:01:15.516293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.097 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.097 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:44.097 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.097 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.097 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.356 [2024-12-05 14:01:15.641072] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.356 14:01:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.356 null0 00:29:44.356 [2024-12-05 14:01:15.763586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.356 [2024-12-05 14:01:15.787811] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2348693 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2348693 /var/tmp/bperf.sock 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2348693 ']' 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:44.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:44.356 14:01:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.356 [2024-12-05 14:01:15.839963] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:44.356 [2024-12-05 14:01:15.840037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348693 ] 00:29:44.613 [2024-12-05 14:01:15.906637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.613 [2024-12-05 14:01:15.961739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.613 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.613 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:44.613 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:44.613 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:44.870 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:44.870 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.870 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.870 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.870 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.870 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:45.439 nvme0n1 00:29:45.439 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:45.439 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.439 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:45.439 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.439 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:45.439 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:45.698 Running I/O for 2 seconds... 00:29:45.698 [2024-12-05 14:01:16.996885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:16.996946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:16.996966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.010528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.010561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.010594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.027030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.027061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.027093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.037988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.038019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25566 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.038053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.054076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.054115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.054149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.067989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.068020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.068053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.078267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.078295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.078325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.094257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.094286] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.094317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.110671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.110701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.110732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.123732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.698 [2024-12-05 14:01:17.123778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.698 [2024-12-05 14:01:17.123795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.698 [2024-12-05 14:01:17.138520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.699 [2024-12-05 14:01:17.138552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.699 [2024-12-05 14:01:17.138569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.699 [2024-12-05 14:01:17.153949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.699 [2024-12-05 
14:01:17.153978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.699 [2024-12-05 14:01:17.154008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.699 [2024-12-05 14:01:17.164290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.699 [2024-12-05 14:01:17.164319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.699 [2024-12-05 14:01:17.164349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.699 [2024-12-05 14:01:17.179716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.699 [2024-12-05 14:01:17.179749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.699 [2024-12-05 14:01:17.179782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.699 [2024-12-05 14:01:17.195830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.699 [2024-12-05 14:01:17.195863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.699 [2024-12-05 14:01:17.195880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.699 [2024-12-05 14:01:17.208215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7187e0) 00:29:45.699 [2024-12-05 14:01:17.208246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.699 [2024-12-05 14:01:17.208279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.699 [2024-12-05 14:01:17.221150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.699 [2024-12-05 14:01:17.221180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.699 [2024-12-05 14:01:17.221196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.236392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.236451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.236469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.251295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.251326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.251359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.267652] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.267685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.267728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.284836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.284864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.284894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.299023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.299053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.299092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.310648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.310679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.310713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.325412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.325465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.325483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.338625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.338673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.958 [2024-12-05 14:01:17.350158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.958 [2024-12-05 14:01:17.350186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.958 [2024-12-05 14:01:17.350216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.366347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.366374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.366406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.381716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.381763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.381779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.395829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.395856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.395888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.409306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.409333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.409363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.423549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.423586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.423604] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.436073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.436118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.436134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.448973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.449019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.449035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.461651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.461679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.461695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.959 [2024-12-05 14:01:17.474269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:45.959 [2024-12-05 14:01:17.474297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14445 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:45.959 [2024-12-05 14:01:17.474327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.216 [2024-12-05 14:01:17.484920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.216 [2024-12-05 14:01:17.484951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.216 [2024-12-05 14:01:17.484982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.500651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.500682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.500699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.515933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.515961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.515992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.532167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.532199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:86 nsid:1 lba:17897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.532232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.546142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.546172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.546205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.558483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.558511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.558527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.571487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.571516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.571532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.587924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.587952] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.587983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.600261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.600290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.600322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.616522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.616554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.616572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.628965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.628994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.629024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.641974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.642002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.642033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.657109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.657147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.657179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.670532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.670562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.670579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.683065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.683112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.683129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.695826] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.695855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.695885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.708589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.708618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.708635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.720519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.720546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.720576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.217 [2024-12-05 14:01:17.734955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.217 [2024-12-05 14:01:17.734985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.217 [2024-12-05 14:01:17.735017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.747696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.747728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.747745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.761056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.761086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.761118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.773807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.773834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.773865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.785641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.785669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.785684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.800583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.800611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.800627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.815329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.815358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.815388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.826141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.826168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.826198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.842347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.842375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.842406] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.855735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.855763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.855794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.869055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.869085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.869118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.477 [2024-12-05 14:01:17.885985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.477 [2024-12-05 14:01:17.886031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.477 [2024-12-05 14:01:17.886055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:17.899046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.899076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.899109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:17.911074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.911122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.911138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:17.925257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.925284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.925314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:17.937733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.937775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.937790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:17.950316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.950344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:80 nsid:1 lba:13209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.950374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:17.963234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.963263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.963296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:17.975563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.975593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.975610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 18471.00 IOPS, 72.15 MiB/s [2024-12-05T13:01:18.004Z] [2024-12-05 14:01:17.989842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.478 [2024-12-05 14:01:17.989872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:17.989906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.478 [2024-12-05 14:01:18.000979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 
00:29:46.478 [2024-12-05 14:01:18.001012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.478 [2024-12-05 14:01:18.001044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.016090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.016120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.016152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.031070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.031099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.031131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.047241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.047272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.047289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.060325] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.060356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.060388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.074477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.074508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.074525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.085837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.085865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.085895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.100658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.100688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.100704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.114996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.115025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.115041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.127425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.127454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.127486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.141943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.141972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.142005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.153125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.153152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.153182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.169660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.169689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.169720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.183244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.183274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.183306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.194764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.194791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.194821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.209898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.737 [2024-12-05 14:01:18.209929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.737 [2024-12-05 14:01:18.209946] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.737 [2024-12-05 14:01:18.222873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.738 [2024-12-05 14:01:18.222901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.738 [2024-12-05 14:01:18.222931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.738 [2024-12-05 14:01:18.236338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.738 [2024-12-05 14:01:18.236381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.738 [2024-12-05 14:01:18.236403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.738 [2024-12-05 14:01:18.249517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.738 [2024-12-05 14:01:18.249547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.738 [2024-12-05 14:01:18.249562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.738 [2024-12-05 14:01:18.261908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.738 [2024-12-05 14:01:18.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15522 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:46.738 [2024-12-05 14:01:18.261971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.274516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.274548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.274565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.289799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.289827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.289857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.305140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.305168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.305199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.318138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.318167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:100 nsid:1 lba:7991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.318183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.331931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.331975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.331991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.347833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.347878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.347895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.359668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.359712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.359728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.372538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.372566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.372598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.384870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.384898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.384928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.397855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.397900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.397918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.411056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.411086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.411119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.424066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.424094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.424123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.437123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.437154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.437186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.449166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.449193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.449225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.462936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.462963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.998 [2024-12-05 14:01:18.462999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.998 [2024-12-05 14:01:18.478394] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.998 [2024-12-05 14:01:18.478450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.999 [2024-12-05 14:01:18.478470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.999 [2024-12-05 14:01:18.492213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.999 [2024-12-05 14:01:18.492240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.999 [2024-12-05 14:01:18.492269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.999 [2024-12-05 14:01:18.505991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.999 [2024-12-05 14:01:18.506037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.999 [2024-12-05 14:01:18.506055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.999 [2024-12-05 14:01:18.519100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:46.999 [2024-12-05 14:01:18.519130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.999 [2024-12-05 14:01:18.519162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.534867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.534900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.534933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.545079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.545106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.545138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.559732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.559776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.559792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.575412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.575448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.575465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.589788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.589837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.589855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.603052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.603083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.603115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.615480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.615510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.615526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.629653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.629683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.629714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.643285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.643316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.643348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.653912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.653940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.653971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.668678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.668710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.668741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.685766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.685794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.685824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.695991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.696018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.696049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.711741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.711771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.711803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.726452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.726483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.726501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.738333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.738361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:3143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.738392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.754058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.754088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.754104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.767195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.767239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.767256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.260 [2024-12-05 14:01:18.782936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.260 [2024-12-05 14:01:18.782980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.260 [2024-12-05 14:01:18.782995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.521 [2024-12-05 14:01:18.799228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.521 [2024-12-05 14:01:18.799256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.521 [2024-12-05 14:01:18.799287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.521 [2024-12-05 14:01:18.810085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.521 [2024-12-05 14:01:18.810113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.521 [2024-12-05 14:01:18.810143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.521 [2024-12-05 14:01:18.825104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.521 [2024-12-05 14:01:18.825132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.521 [2024-12-05 14:01:18.825169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.521 [2024-12-05 14:01:18.840953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.521 [2024-12-05 14:01:18.840981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.521 [2024-12-05 14:01:18.841011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.521 [2024-12-05 14:01:18.857963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7187e0) 00:29:47.521 [2024-12-05 14:01:18.857993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.521 [2024-12-05 14:01:18.858024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.521 [2024-12-05 14:01:18.872840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.521 [2024-12-05 14:01:18.872867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.521 [2024-12-05 14:01:18.872898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.521 [2024-12-05 14:01:18.889238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.521 [2024-12-05 14:01:18.889268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.889300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.522 [2024-12-05 14:01:18.900615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.522 [2024-12-05 14:01:18.900660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.900677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.522 [2024-12-05 14:01:18.914037] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.522 [2024-12-05 14:01:18.914065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.914095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.522 [2024-12-05 14:01:18.930598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.522 [2024-12-05 14:01:18.930631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.930648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.522 [2024-12-05 14:01:18.943920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.522 [2024-12-05 14:01:18.943948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.943978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.522 [2024-12-05 14:01:18.957192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.522 [2024-12-05 14:01:18.957228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.957245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:47.522 [2024-12-05 14:01:18.969593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.522 [2024-12-05 14:01:18.969621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.969637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.522 [2024-12-05 14:01:18.985619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7187e0) 00:29:47.522 [2024-12-05 14:01:18.985647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.522 [2024-12-05 14:01:18.985663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.522 18477.50 IOPS, 72.18 MiB/s 00:29:47.522 Latency(us) 00:29:47.522 [2024-12-05T13:01:19.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.522 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:47.522 nvme0n1 : 2.00 18494.70 72.24 0.00 0.00 6912.48 3543.80 23787.14 00:29:47.522 [2024-12-05T13:01:19.048Z] =================================================================================================================== 00:29:47.522 [2024-12-05T13:01:19.048Z] Total : 18494.70 72.24 0.00 0.00 6912.48 3543.80 23787.14 00:29:47.522 { 00:29:47.522 "results": [ 00:29:47.522 { 00:29:47.522 "job": "nvme0n1", 00:29:47.522 "core_mask": "0x2", 00:29:47.522 "workload": "randread", 00:29:47.522 "status": "finished", 00:29:47.522 "queue_depth": 128, 00:29:47.522 "io_size": 4096, 00:29:47.522 "runtime": 2.004142, 00:29:47.522 "iops": 18494.69748151578, 00:29:47.522 "mibps": 
72.24491203717102, 00:29:47.522 "io_failed": 0, 00:29:47.522 "io_timeout": 0, 00:29:47.522 "avg_latency_us": 6912.484485132627, 00:29:47.522 "min_latency_us": 3543.7985185185184, 00:29:47.522 "max_latency_us": 23787.140740740742 00:29:47.522 } 00:29:47.522 ], 00:29:47.522 "core_count": 1 00:29:47.522 } 00:29:47.522 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:47.522 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:47.522 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:47.522 | .driver_specific 00:29:47.522 | .nvme_error 00:29:47.522 | .status_code 00:29:47.522 | .command_transient_transport_error' 00:29:47.522 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:47.782 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:29:47.782 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2348693 00:29:47.782 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2348693 ']' 00:29:47.782 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2348693 00:29:47.782 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:47.782 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.782 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348693 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348693' 00:29:48.041 killing process with pid 2348693 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2348693 00:29:48.041 Received shutdown signal, test time was about 2.000000 seconds 00:29:48.041 00:29:48.041 Latency(us) 00:29:48.041 [2024-12-05T13:01:19.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.041 [2024-12-05T13:01:19.567Z] =================================================================================================================== 00:29:48.041 [2024-12-05T13:01:19.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2348693 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2349220 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 
16 -z 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2349220 /var/tmp/bperf.sock 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2349220 ']' 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:48.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.041 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:48.302 [2024-12-05 14:01:19.577013] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:29:48.302 [2024-12-05 14:01:19.577091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349220 ] 00:29:48.302 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:48.302 Zero copy mechanism will not be used. 
00:29:48.302 [2024-12-05 14:01:19.645210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.302 [2024-12-05 14:01:19.706034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.302 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.302 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:48.302 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:48.561 14:01:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:48.820 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:48.820 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.820 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:48.820 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.820 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:48.820 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.388 nvme0n1 00:29:49.388 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:49.388 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.388 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.388 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.388 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:49.388 14:01:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:49.388 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:49.388 Zero copy mechanism will not be used. 00:29:49.388 Running I/O for 2 seconds... 00:29:49.388 [2024-12-05 14:01:20.898511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.388 [2024-12-05 14:01:20.898587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-05 14:01:20.898607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.388 [2024-12-05 14:01:20.905012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.388 [2024-12-05 14:01:20.905048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-05 14:01:20.905067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.388 
[2024-12-05 14:01:20.911774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.388 [2024-12-05 14:01:20.911808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-05 14:01:20.911826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.650 [2024-12-05 14:01:20.919566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.650 [2024-12-05 14:01:20.919599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.650 [2024-12-05 14:01:20.919617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.927856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.927887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.927930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.935600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.935633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.935650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.942436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.942468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.942485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.948001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.948032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.948051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.952894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.952925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.952942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.955939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.955969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.955986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.961796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.961842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.961861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.968007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.968040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.968059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.976168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.976215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.976232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.984547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.984589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 
14:01:20.984608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:20.992432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:20.992489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:20.992507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.000317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.000349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.000366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.008674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.008707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.008725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.016471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.016503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.016520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.024517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.024549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.024569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.032455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.032487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.032506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.040045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.040077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.040095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.047682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.047714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.047732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.055447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.055479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.055497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.062339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.062371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.062389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.069856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.069887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.069906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.077562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 
14:01:21.077594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.077611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.084644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.084675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.084693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.090258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.090289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.090306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.094905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.094935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.094952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.099538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x871c20) 00:29:49.651 [2024-12-05 14:01:21.099568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.651 [2024-12-05 14:01:21.099586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.651 [2024-12-05 14:01:21.104165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.104195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.104218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.108636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.108666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.108683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.113222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.113252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.113268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.117673] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.117702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.117719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.122203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.122233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.122249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.126706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.126736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.126752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.131345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.131374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.131390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.135891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.135921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.135937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.140462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.140491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.140508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.144950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.144985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.145002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.149551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.149580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.149597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.154094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.154123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.154139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.158601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.158630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.158648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.163198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.163227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.163243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.167748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.167776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.167793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.652 [2024-12-05 14:01:21.172390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.652 [2024-12-05 14:01:21.172427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.652 [2024-12-05 14:01:21.172447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.176991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.177022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.177038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.181601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.181630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.181648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.186176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.186206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:49.914 [2024-12-05 14:01:21.186222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.190655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.190685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.190702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.195294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.195323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.195340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.198676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.198706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.198724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.202290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.202320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.202337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.207398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.207436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.207454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.212796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.212842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.212859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.218187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.218217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.218235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.223554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.223585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.223610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.228813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.228858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.228875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.234045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.234090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.234107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.239381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.239413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.239440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.242755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 
00:29:49.914 [2024-12-05 14:01:21.242800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.242818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.247379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.247430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.914 [2024-12-05 14:01:21.247450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.914 [2024-12-05 14:01:21.252979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.914 [2024-12-05 14:01:21.253011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.253028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.260136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.260167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.260199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.267813] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.267860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.267877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.274551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.274590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.274624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.281064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.281095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.281114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.286024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.286055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.286072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.290769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.290812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.290829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.295632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.295664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.295682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.300878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.300909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.300942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.306096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.306128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.306146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.311712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.311743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.311760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.318594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.318625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.318643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.325396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.325434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.325452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.331631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.331663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.331681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.337379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.337410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.337437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.342321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.342351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.342368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.347348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.347378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.347395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.352361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.352392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:49.915 [2024-12-05 14:01:21.352409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.357516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.357547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.357564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.362397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.362436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.362455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.367121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.367151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.367175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.371566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.371596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.371612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.376084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.376114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.376131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.380536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.380565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.380582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.385023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.385053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.385069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.389560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.389590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.389606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.394037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.394066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.394083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.398539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.398569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.398585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.403102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.915 [2024-12-05 14:01:21.403132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.915 [2024-12-05 14:01:21.403149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.915 [2024-12-05 14:01:21.407584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 
00:29:49.915 [2024-12-05 14:01:21.407614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.916 [2024-12-05 14:01:21.407631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.916 [2024-12-05 14:01:21.413041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.916 [2024-12-05 14:01:21.413071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.916 [2024-12-05 14:01:21.413089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.916 [2024-12-05 14:01:21.419631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.916 [2024-12-05 14:01:21.419661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.916 [2024-12-05 14:01:21.419679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.916 [2024-12-05 14:01:21.426840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.916 [2024-12-05 14:01:21.426871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.916 [2024-12-05 14:01:21.426889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.916 [2024-12-05 14:01:21.432731] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:49.916 [2024-12-05 14:01:21.432762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.916 [2024-12-05 14:01:21.432779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.438915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.438946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.176 [2024-12-05 14:01:21.438964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.445249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.445281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.176 [2024-12-05 14:01:21.445299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.451087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.451119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.176 [2024-12-05 14:01:21.451136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.456961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.456993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.176 [2024-12-05 14:01:21.457019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.462337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.462371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.176 [2024-12-05 14:01:21.462389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.467461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.467491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.176 [2024-12-05 14:01:21.467509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.473224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.473255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.176 [2024-12-05 14:01:21.473272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.176 [2024-12-05 14:01:21.479130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.176 [2024-12-05 14:01:21.479161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.479179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.484518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.484549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.484567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.489218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.489249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.489266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.493836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.493866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.493884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.499125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.499156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.499173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.504570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.504608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.504626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.509649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.509680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.509698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.515540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.515571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.515588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.520118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.520147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.520164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.524762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.524792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.524808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.529927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.529957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.529974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.534915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.534945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.534962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.539501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.539530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.539547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.544072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.544101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.544117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.548636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.548666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.548683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.553257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.553286] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.553302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.557751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.557781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.557798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.562468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.562497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.562514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.567044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.567073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.567090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.571858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.571886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.571919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.576397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.576433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.576452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.580964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.580994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.581011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.585427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.585456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.585479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.589978] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.590007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.590025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.594366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.594395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.594412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.598883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.598913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.598931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.603459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.603489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.177 [2024-12-05 14:01:21.603506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:50.177 [2024-12-05 14:01:21.608478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.177 [2024-12-05 14:01:21.608508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.608525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.613725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.613755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.613772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.619106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.619137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.619154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.624823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.624854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.624872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.630851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.630889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.630908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.635626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.635657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.635675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.638869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.638899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.638917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.643989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.644019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.644037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.649538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.649569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.649586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.655598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.655643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.655659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.661249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.661280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.661297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.666624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.666655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.666672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.671977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.672008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.672025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.678293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.678325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.678342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.683781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.683814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.683831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.688552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.688584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.688601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.693197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.693228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.693245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.178 [2024-12-05 14:01:21.697788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.178 [2024-12-05 14:01:21.697818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.178 [2024-12-05 14:01:21.697835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.439 [2024-12-05 14:01:21.702932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.439 [2024-12-05 14:01:21.702964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.439 [2024-12-05 14:01:21.702982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.439 [2024-12-05 14:01:21.708884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.439 [2024-12-05 14:01:21.708917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.439 [2024-12-05 14:01:21.708934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.439 [2024-12-05 14:01:21.714981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.439 [2024-12-05 14:01:21.715028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.439 [2024-12-05 14:01:21.715046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.439 [2024-12-05 14:01:21.722376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.439 [2024-12-05 14:01:21.722430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.439 [2024-12-05 14:01:21.722471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.730362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.730408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.730449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.738307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.738339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.738372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.746508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.746541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.746558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.754574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.754607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.754624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.762815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.762860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.762877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.771064] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.771109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.771125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.779071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.779117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.779134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.787233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.787280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.787298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.795578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.795610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.795628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.803686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.803718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.803736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.811538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.811570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.811588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.819401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.819444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.819463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.827379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.827411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.827437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.835006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.835038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.835055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.841000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.841032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.841049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.846098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.846129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.846147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.851816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.851846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.851870] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.857633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.857663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.857680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.862347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.862376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.862393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.866812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.866842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.866858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.871359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.871389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.871406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.875882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.875912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.875929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.880350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.880380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.880397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.885031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.885060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.440 [2024-12-05 14:01:21.885077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.440 [2024-12-05 14:01:21.889635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.440 [2024-12-05 14:01:21.889665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.440 [2024-12-05 14:01:21.889681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:50.440 [2024-12-05 14:01:21.894313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20)
00:29:50.440 [2024-12-05 14:01:21.894349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.440 [2024-12-05 14:01:21.894368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:50.440 5517.00 IOPS, 689.62 MiB/s [2024-12-05T13:01:21.966Z]
[... same record triplet repeats: "nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20)" followed by a READ command print (nvme_qpair.c:243) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (nvme_qpair.c:474), all on qid:1 with cids 0-15 and varying LBAs, from 14:01:21.894 through 14:01:22.309 ...]
00:29:50.966 [2024-12-05 14:01:22.309013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20)
00:29:50.966 [2024-12-05 14:01:22.309044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11
nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.309076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.317507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.317547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.317565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.325875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.325906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.325938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.333777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.333809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.333826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.341940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.341972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.341990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.349909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.349940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.349957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.357920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.357951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.357975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.366398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.366437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.366455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.374651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 
00:29:50.966 [2024-12-05 14:01:22.374682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.374700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.966 [2024-12-05 14:01:22.381324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.966 [2024-12-05 14:01:22.381355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.966 [2024-12-05 14:01:22.381373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.386958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.386990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.387007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.391851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.391882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.391899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.396451] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.396480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.396497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.401696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.401727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.401745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.407015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.407046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.407064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.412339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.412379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.412397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.417506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.417536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.417554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.423220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.423250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.423267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.427806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.427835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.427851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.432475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.432504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.432521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.437772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.437803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.437820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.442934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.442965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.442982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.449125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.449156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.449173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.456767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.456798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.456831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.464335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.464382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.464400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.470552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.470583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.470601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.476135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.476166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.476183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.481589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.481620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:50.967 [2024-12-05 14:01:22.481637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.967 [2024-12-05 14:01:22.486172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:50.967 [2024-12-05 14:01:22.486202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.967 [2024-12-05 14:01:22.486218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.490818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.490849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.490866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.495611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.495641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.495658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.500502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.500532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.500549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.505396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.505434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.505459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.510198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.510229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.510245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.514874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.514903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.514920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.520198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.520229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.520246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.525542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.525573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.525590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.530218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.530263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.530279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.535235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.535265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.535282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.540915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 
00:29:51.229 [2024-12-05 14:01:22.540945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.540963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.547098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.547130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.547147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.553006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.229 [2024-12-05 14:01:22.553053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.229 [2024-12-05 14:01:22.553070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.229 [2024-12-05 14:01:22.558198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.558229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.558246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.561174] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.561202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.561232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.566439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.566470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.566488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.572659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.572690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.572707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.577058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.577103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.577119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.582503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.582533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.582551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.587686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.587714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.587729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.592862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.592891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.592929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.597727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.597756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.597772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.603184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.603229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.603246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.608703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.608735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.608752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.616650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.616715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.624239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.624284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.624301] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.632190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.632221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.632255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.639296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.639341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.639358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.646930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.646960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.646994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.653553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.653603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.653622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.658986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.659030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.659046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.663940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.663970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.664003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.669201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.669231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.669262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.675281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.675313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.675331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.681406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.681444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.681463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.686893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.686925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.686942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.694128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.694159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.694191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.701439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.701470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.701488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.708332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.708364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.708382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.230 [2024-12-05 14:01:22.714635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.230 [2024-12-05 14:01:22.714666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.230 [2024-12-05 14:01:22.714683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.231 [2024-12-05 14:01:22.721749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.231 [2024-12-05 14:01:22.721781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.231 [2024-12-05 14:01:22.721798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.231 [2024-12-05 14:01:22.729904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 
00:29:51.231 [2024-12-05 14:01:22.729935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.231 [2024-12-05 14:01:22.729953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.231 [2024-12-05 14:01:22.736968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.231 [2024-12-05 14:01:22.736999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.231 [2024-12-05 14:01:22.737017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.231 [2024-12-05 14:01:22.743304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.231 [2024-12-05 14:01:22.743334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.231 [2024-12-05 14:01:22.743352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.231 [2024-12-05 14:01:22.749307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.231 [2024-12-05 14:01:22.749339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.231 [2024-12-05 14:01:22.749356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.754601] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.754633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.754650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.759377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.759408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.759441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.764131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.764160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.764177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.768810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.768839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.768856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.774613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.774644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.774661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.780075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.780106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.780123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.785716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.785747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.785764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.791486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.791518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.791535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.796744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.796774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.796806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.801274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.801319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.801337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.804443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.804493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.804511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.809839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.809868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.809900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.814779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.814810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.814827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.819741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.819786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.819803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.825140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.825170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.825202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.830908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.830940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:51.491 [2024-12-05 14:01:22.830956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.836850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.836882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.836899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.842162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.842193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.842226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.491 [2024-12-05 14:01:22.847674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.491 [2024-12-05 14:01:22.847705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.491 [2024-12-05 14:01:22.847722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.853681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.853713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.853732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.858375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.858405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.858431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.863203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.863234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.863255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.868764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.868795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.868811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.873501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.873532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.873548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.878254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.878287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.878304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.882823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.882853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.882870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.888979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.889010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.889028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.894070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 
00:29:51.492 [2024-12-05 14:01:22.894117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.894161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:51.492 [2024-12-05 14:01:22.899467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x871c20) 00:29:51.492 [2024-12-05 14:01:22.899499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.492 [2024-12-05 14:01:22.899517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:51.492 5570.50 IOPS, 696.31 MiB/s 00:29:51.492 Latency(us) 00:29:51.492 [2024-12-05T13:01:23.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.492 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:51.492 nvme0n1 : 2.00 5569.80 696.22 0.00 0.00 2868.58 697.84 8641.04 00:29:51.492 [2024-12-05T13:01:23.018Z] =================================================================================================================== 00:29:51.492 [2024-12-05T13:01:23.018Z] Total : 5569.80 696.22 0.00 0.00 2868.58 697.84 8641.04 00:29:51.492 { 00:29:51.492 "results": [ 00:29:51.492 { 00:29:51.492 "job": "nvme0n1", 00:29:51.492 "core_mask": "0x2", 00:29:51.492 "workload": "randread", 00:29:51.492 "status": "finished", 00:29:51.492 "queue_depth": 16, 00:29:51.492 "io_size": 131072, 00:29:51.492 "runtime": 2.003305, 00:29:51.492 "iops": 5569.795912254998, 00:29:51.492 "mibps": 696.2244890318748, 00:29:51.492 "io_failed": 0, 00:29:51.492 "io_timeout": 0, 00:29:51.492 "avg_latency_us": 2868.5824370489863, 00:29:51.492 "min_latency_us": 697.837037037037, 
00:29:51.492 "max_latency_us": 8641.042962962963 00:29:51.492 } 00:29:51.492 ], 00:29:51.492 "core_count": 1 00:29:51.492 } 00:29:51.492 14:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:51.492 14:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:51.492 14:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:51.492 | .driver_specific 00:29:51.492 | .nvme_error 00:29:51.492 | .status_code 00:29:51.492 | .command_transient_transport_error' 00:29:51.492 14:01:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 360 > 0 )) 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2349220 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2349220 ']' 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2349220 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2349220 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:51.751 
14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2349220' 00:29:51.751 killing process with pid 2349220 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2349220 00:29:51.751 Received shutdown signal, test time was about 2.000000 seconds 00:29:51.751 00:29:51.751 Latency(us) 00:29:51.751 [2024-12-05T13:01:23.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.751 [2024-12-05T13:01:23.277Z] =================================================================================================================== 00:29:51.751 [2024-12-05T13:01:23.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:51.751 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2349220 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2349635 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2349635 /var/tmp/bperf.sock 00:29:52.008 14:01:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2349635 ']' 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:52.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.008 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.008 [2024-12-05 14:01:23.483098] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:29:52.008 [2024-12-05 14:01:23.483180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349635 ] 00:29:52.267 [2024-12-05 14:01:23.550583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.267 [2024-12-05 14:01:23.607356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.267 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.267 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:52.267 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.267 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.525 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:52.526 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.526 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.526 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.526 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:52.526 14:01:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.094 nvme0n1 00:29:53.094 14:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:53.094 14:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.094 14:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:53.094 14:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.094 14:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:53.094 14:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:53.094 Running I/O for 2 seconds... 
00:29:53.094 [2024-12-05 14:01:24.466452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016eeea00 00:29:53.094 [2024-12-05 14:01:24.467884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.467939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.477577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016edece0 00:29:53.094 [2024-12-05 14:01:24.478814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.478845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.489463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016eef270 00:29:53.094 [2024-12-05 14:01:24.490446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.490477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.501237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016eeee38 00:29:53.094 [2024-12-05 14:01:24.502261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.502304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.515868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee9168 00:29:53.094 [2024-12-05 14:01:24.517518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.517561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.524281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee3498 00:29:53.094 [2024-12-05 14:01:24.525123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.525166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.536271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016eeb760 00:29:53.094 [2024-12-05 14:01:24.537049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.537101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.550173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016efa7d8 00:29:53.094 [2024-12-05 14:01:24.551179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.551209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.561137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016eea680 00:29:53.094 [2024-12-05 14:01:24.562038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.562068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.572986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee0ea0 00:29:53.094 [2024-12-05 14:01:24.574086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.574128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.584735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee12d8 00:29:53.094 [2024-12-05 14:01:24.586026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.586070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.596634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016efb480 00:29:53.094 [2024-12-05 14:01:24.597871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.597913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:53.094 [2024-12-05 14:01:24.607739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016eed4e8 00:29:53.094 [2024-12-05 14:01:24.608953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.094 [2024-12-05 14:01:24.608982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:53.353 [2024-12-05 14:01:24.619734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee12d8 00:29:53.353 [2024-12-05 14:01:24.620957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.353 [2024-12-05 14:01:24.620986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:53.353 [2024-12-05 14:01:24.632944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee2c28 00:29:53.353 [2024-12-05 14:01:24.634356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.353 [2024-12-05 14:01:24.634399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:53.353 [2024-12-05 14:01:24.644079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016edf550 00:29:53.353 [2024-12-05 14:01:24.645402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 
[2024-12-05 14:01:24.645453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.658624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ef31b8 00:29:53.354 [2024-12-05 14:01:24.660484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.660512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.667012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee4140 00:29:53.354 [2024-12-05 14:01:24.668000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.668042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.681392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.681717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.681747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.695523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.695774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6287 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.695818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.709815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.710073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.710102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.724119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.724487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.724517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.738629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.738939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.738982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.752728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.753022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:9549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.753050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.767091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.767389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.767423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.781259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.781544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.781588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.795695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.796026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.796053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.809881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.810324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.810351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.823993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.824288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.824330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.838326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.838618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.838646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.852581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 [2024-12-05 14:01:24.852853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.852895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.354 [2024-12-05 14:01:24.866561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.354 
[2024-12-05 14:01:24.866783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.354 [2024-12-05 14:01:24.866812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.621 [2024-12-05 14:01:24.880201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.621 [2024-12-05 14:01:24.880455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.621 [2024-12-05 14:01:24.880496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.621 [2024-12-05 14:01:24.893949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.621 [2024-12-05 14:01:24.894212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.621 [2024-12-05 14:01:24.894257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.621 [2024-12-05 14:01:24.908148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.621 [2024-12-05 14:01:24.908403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.621 [2024-12-05 14:01:24.908438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.621 [2024-12-05 14:01:24.922111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.621 [2024-12-05 14:01:24.922372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.621 [2024-12-05 14:01:24.922415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.621 [2024-12-05 14:01:24.936177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.621 [2024-12-05 14:01:24.936442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.621 [2024-12-05 14:01:24.936471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.621 [2024-12-05 14:01:24.950273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.621 [2024-12-05 14:01:24.950562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.621 [2024-12-05 14:01:24.950605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.621 [2024-12-05 14:01:24.964269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.621 [2024-12-05 14:01:24.964548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:24.964577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:24.978161] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:24.978451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:24.978478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:24.992816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:24.993081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:24.993109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.006893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.007164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.007206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.021018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.021283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.021326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:29:53.622 [2024-12-05 14:01:25.035362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.035607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.035636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.049661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.049911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.049939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.063879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.064144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.064172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.077965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.078226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.078254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.092149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.092408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.092467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.106297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.106545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.106573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.120531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.120812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.120838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.622 [2024-12-05 14:01:25.134565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.622 [2024-12-05 14:01:25.134817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.622 [2024-12-05 14:01:25.134845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.887 [2024-12-05 14:01:25.148674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.887 [2024-12-05 14:01:25.148921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.148949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.162618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.162867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.162908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.176651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.176931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.176974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.190691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.190945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.190988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.204951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.205215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.205244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.218986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.219264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.219292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.233156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.233429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.233458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.247678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.247956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 
[2024-12-05 14:01:25.247988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.261634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.261886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.261928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.275600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.275855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.275883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.289697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.289948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.289976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.303819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.304078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1042 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.304120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.318017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.318278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.318320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.332318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.332612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.332657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.346636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.346887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.346915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.360688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.360938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:16572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.360981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.374621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.374878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.374907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.388500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.388732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.388783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.888 [2024-12-05 14:01:25.402590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:53.888 [2024-12-05 14:01:25.402842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.888 [2024-12-05 14:01:25.402870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.416739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.417004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.417045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.430769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.431032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.431060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.444844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.445091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.445117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 18777.00 IOPS, 73.35 MiB/s [2024-12-05T13:01:25.673Z] [2024-12-05 14:01:25.458777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.459041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.459086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.472886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.473145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.473171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.486749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.487016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.487044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.501374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.501621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.501649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.515235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.515507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.515548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.529256] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.529533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.529561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.543318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.543609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.543653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.557468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.557689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.557717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.147 [2024-12-05 14:01:25.571384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.571632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.147 [2024-12-05 14:01:25.571659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:29:54.147 [2024-12-05 14:01:25.585276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.147 [2024-12-05 14:01:25.585557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.148 [2024-12-05 14:01:25.585586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.148 [2024-12-05 14:01:25.599294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.148 [2024-12-05 14:01:25.599582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.148 [2024-12-05 14:01:25.599611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.148 [2024-12-05 14:01:25.613289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.148 [2024-12-05 14:01:25.613567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.148 [2024-12-05 14:01:25.613601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.148 [2024-12-05 14:01:25.627210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.148 [2024-12-05 14:01:25.627493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.148 [2024-12-05 14:01:25.627522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.148 [2024-12-05 14:01:25.641445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.148 [2024-12-05 14:01:25.641669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.148 [2024-12-05 14:01:25.641697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.148 [2024-12-05 14:01:25.655353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.148 [2024-12-05 14:01:25.655610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.148 [2024-12-05 14:01:25.655637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.148 [2024-12-05 14:01:25.669365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.148 [2024-12-05 14:01:25.669597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.148 [2024-12-05 14:01:25.669625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.683142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.683400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.683452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.697369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.697614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.697652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.711529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.711788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.711830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.725700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.725948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.725991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.739892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.740158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.740207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.754386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.754632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.754660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.768087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.768352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.768395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.782083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.782346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.782387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.796303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.796552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 
[2024-12-05 14:01:25.796581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.810337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.810585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.810614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.824468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.824690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.824718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.838540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.838765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.838808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.852089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.852326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:332 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.852354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.865428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.865663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.865691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.407 [2024-12-05 14:01:25.879032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.407 [2024-12-05 14:01:25.879298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.407 [2024-12-05 14:01:25.879326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.408 [2024-12-05 14:01:25.893341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.408 [2024-12-05 14:01:25.893587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.408 [2024-12-05 14:01:25.893615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.408 [2024-12-05 14:01:25.907493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.408 [2024-12-05 14:01:25.907756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:13964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.408 [2024-12-05 14:01:25.907799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.408 [2024-12-05 14:01:25.921712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.408 [2024-12-05 14:01:25.921958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.408 [2024-12-05 14:01:25.921986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:25.935759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:25.935997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:25.936024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:25.949888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:25.950153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:25.950196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:25.963949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:25.964214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:25.964256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:25.978243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:25.978517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:25.978545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:25.992444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:25.992668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:25.992696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.007080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.007347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.007390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.020978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 
[2024-12-05 14:01:26.021267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.021297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.034989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.035269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.035316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.049167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.049460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.049506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.063240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.063521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.063566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.077535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.077781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.077810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.091977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.092205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.092235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.106085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.106338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.106391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.120312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.120559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.120588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.134496] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.134737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.134764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.148556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.148793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.148820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.162745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.162982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.163009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.667 [2024-12-05 14:01:26.176999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.177293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.177320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:29:54.667 [2024-12-05 14:01:26.191445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.667 [2024-12-05 14:01:26.191697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.667 [2024-12-05 14:01:26.191725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.205950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.206250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.206277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.220004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.220298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.220340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.234550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.234914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.234958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.249004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.249314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.249357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.263844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.264149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.264192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.278007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.278364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.278392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.292162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.292522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.292551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.306538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.306814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.306855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.321008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.321316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.321356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.335278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.335614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.335643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.349487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.349779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.349823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.363863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.364146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.364176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.377736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.378015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.378043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.391623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.391906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.391934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.405551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.405808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 
[2024-12-05 14:01:26.405835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.419766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.420045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.420089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.434157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.434484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.434513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:54.926 [2024-12-05 14:01:26.448203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:54.926 [2024-12-05 14:01:26.448425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.926 [2024-12-05 14:01:26.448454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:55.184 18433.00 IOPS, 72.00 MiB/s [2024-12-05T13:01:26.710Z] [2024-12-05 14:01:26.461978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d220) with pdu=0x200016ee7c50 00:29:55.184 [2024-12-05 14:01:26.462233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:63 nsid:1 lba:10183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:55.184 [2024-12-05 14:01:26.462261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:55.184
00:29:55.184 Latency(us)
00:29:55.184 [2024-12-05T13:01:26.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.184 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:55.184 nvme0n1 : 2.01 18434.56 72.01 0.00 0.00 6927.65 2742.80 15340.28
00:29:55.184 [2024-12-05T13:01:26.710Z] ===================================================================================================================
00:29:55.184 [2024-12-05T13:01:26.710Z] Total : 18434.56 72.01 0.00 0.00 6927.65 2742.80 15340.28
00:29:55.184 {
00:29:55.184 "results": [
00:29:55.184 {
00:29:55.184 "job": "nvme0n1",
00:29:55.184 "core_mask": "0x2",
00:29:55.184 "workload": "randwrite",
00:29:55.184 "status": "finished",
00:29:55.184 "queue_depth": 128,
00:29:55.184 "io_size": 4096,
00:29:55.184 "runtime": 2.006774,
00:29:55.184 "iops": 18434.562138038465,
00:29:55.184 "mibps": 72.01000835171276,
00:29:55.184 "io_failed": 0,
00:29:55.184 "io_timeout": 0,
00:29:55.184 "avg_latency_us": 6927.651287596188,
00:29:55.184 "min_latency_us": 2742.8029629629627,
00:29:55.184 "max_latency_us": 15340.278518518518
00:29:55.184 }
00:29:55.184 ],
00:29:55.184 "core_count": 1
00:29:55.184 }
00:29:55.184 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:55.184 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:55.184 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:55.184 14:01:26
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:55.184 | .driver_specific
00:29:55.184 | .nvme_error
00:29:55.184 | .status_code
00:29:55.184 | .command_transient_transport_error'
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2349635
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2349635 ']'
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2349635
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2349635
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2349635'
killing process with pid 2349635
14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2349635
Received shutdown signal, test time was about 2.000000 seconds
00:29:55.443
00:29:55.443 Latency(us)
00:29:55.443 [2024-12-05T13:01:26.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.443 [2024-12-05T13:01:26.969Z] ===================================================================================================================
00:29:55.443 [2024-12-05T13:01:26.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:55.443 14:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2349635
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2350035
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2350035 /var/tmp/bperf.sock
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2350035 ']'
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:55.701 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:55.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:55.702 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:55.702 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:55.702 [2024-12-05 14:01:27.123076] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization...
00:29:55.702 [2024-12-05 14:01:27.123161] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350035 ]
00:29:55.702 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:55.702 Zero copy mechanism will not be used.
00:29:55.702 [2024-12-05 14:01:27.190160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:55.959 [2024-12-05 14:01:27.243745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:55.959 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:55.959 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:55.959 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:55.959 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:56.216 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:56.216 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.216 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.216 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.216 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.216 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.474 nvme0n1
00:29:56.474 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:56.474 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.474 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.474 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.474 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:56.474 14:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:56.733 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:56.733 Zero copy mechanism will not be used.
00:29:56.733 Running I/O for 2 seconds...
00:29:56.733 [2024-12-05 14:01:28.080094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.733 [2024-12-05 14:01:28.080193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.733 [2024-12-05 14:01:28.080232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.733 [2024-12-05 14:01:28.086078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.733 [2024-12-05 14:01:28.086211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.733 [2024-12-05 14:01:28.086242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.733 [2024-12-05 14:01:28.092631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.733 [2024-12-05 14:01:28.092735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.733 [2024-12-05 14:01:28.092765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.733 [2024-12-05 14:01:28.099675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.733 [2024-12-05 14:01:28.099817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.733 [2024-12-05 14:01:28.099847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.733 [2024-12-05 14:01:28.106365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.733 [2024-12-05 14:01:28.106467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.106497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.112025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.112136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.112166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.117021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.117126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.117155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.121907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.122022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.122051] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.127429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.127510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.127537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.133069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.133137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.133165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.138518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.138601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.138630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.143609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.143697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.143726] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.148591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.148714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.148743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.154454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.154654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.154682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.160870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.160995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.161024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.168202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.168313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:56.734 [2024-12-05 14:01:28.168342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.173964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.174278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.178797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.179098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.179127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.183866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.184203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.184232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.189244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.189573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.189603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.194692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.195007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.195036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.200048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.200388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.200423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.205458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.205811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.205840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.210609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.210893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.210922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.215756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.216054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.216082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.220809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.221111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.221149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.225781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.226076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.226105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.232406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.232761] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.232790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.238481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.238819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.238848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.244570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.244911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.244940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.250181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.250503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.250532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.734 [2024-12-05 14:01:28.256838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with 
pdu=0x200016eff3c8 00:29:56.734 [2024-12-05 14:01:28.257184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.734 [2024-12-05 14:01:28.257213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.262593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.262882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.262911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.267555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.267845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.267874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.272693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.272973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.273001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.277245] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.277519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.277548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.281594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.281827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.281855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.285977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.286196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.286226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.290290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.290523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.290552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 
14:01:28.294518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.294726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.294754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.298832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.299050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.299078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.303371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.303589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.303618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.308000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.308252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.308281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.312646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.312869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.312898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.317472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.317735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.317764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.322678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.322906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.322934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.326909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.327136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.327165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.331997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.332213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.332241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.337543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.337734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.337762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.343694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.343927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.343955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.349364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.349670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.349699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.354675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.354999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.355033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.359799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.360018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.360046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.365013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.365260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.365289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.370241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.370473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 
[2024-12-05 14:01:28.370502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.375409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.375662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.375690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.380522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.380720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.380748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.385701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.385934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.385962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.994 [2024-12-05 14:01:28.390809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.994 [2024-12-05 14:01:28.391007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.994 [2024-12-05 14:01:28.391036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.395985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.396245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.396274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.401243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.401455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.401489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.406629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.406916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.406944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.411760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.411995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.412024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.417036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.417323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.417352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.421634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.421814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.421842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.425902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.426086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.426115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.430238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.430403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.430445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.434375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.434539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.434568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.438703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.438903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.438932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.443698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.443900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.443929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.448354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 
[2024-12-05 14:01:28.448555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.448584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.452941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.453155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.453185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.457256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.457462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.457499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.461504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.461674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.461703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.466377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.466636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.466665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.471480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.471701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.471729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.476986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.477163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.477191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.482300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.482461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.482495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.487439] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.487696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.487725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.492796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.493028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.493057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.497951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.498235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.498264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.503101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.503318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.503347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:56.995 [2024-12-05 14:01:28.508917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.509168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.509197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.995 [2024-12-05 14:01:28.514207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:56.995 [2024-12-05 14:01:28.514516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.995 [2024-12-05 14:01:28.514545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.254 [2024-12-05 14:01:28.519313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.254 [2024-12-05 14:01:28.519594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.254 [2024-12-05 14:01:28.519624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.254 [2024-12-05 14:01:28.524533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.254 [2024-12-05 14:01:28.524825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.254 [2024-12-05 14:01:28.524854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.254 [2024-12-05 14:01:28.529666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.254 [2024-12-05 14:01:28.529898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.529927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.255 [2024-12-05 14:01:28.534855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.255 [2024-12-05 14:01:28.535100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.535129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.255 [2024-12-05 14:01:28.540085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.255 [2024-12-05 14:01:28.540312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.540340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.255 [2024-12-05 14:01:28.545220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.255 [2024-12-05 14:01:28.545536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.545566] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.255 [2024-12-05 14:01:28.550345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.255 [2024-12-05 14:01:28.550605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.550635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.255 [2024-12-05 14:01:28.555561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.255 [2024-12-05 14:01:28.555801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.555830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.255 [2024-12-05 14:01:28.560777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.255 [2024-12-05 14:01:28.561016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.561045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.255 [2024-12-05 14:01:28.565842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.255 [2024-12-05 14:01:28.566109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.255 [2024-12-05 14:01:28.566137] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.572289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.572541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.572571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.576848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.577101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.577131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.581345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.581548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.581576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.585893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.586108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.586136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.590832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.591081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.591110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.596111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.596389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.596429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.601078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.601343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.601373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.606160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.606431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.606461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.611085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.611384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.611414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.616186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.616467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.616502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.621249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.621554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.621583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.626216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.626504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.626533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.631273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.631573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.631602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.636345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.636657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.636686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.641530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.641743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.641771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.647609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.647823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.647852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.652810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.653063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.653092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.657866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.658037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.658065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.663100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.663242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.663277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.668271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.668455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.668486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.673512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.673655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.673683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.678713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.255 [2024-12-05 14:01:28.678863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.255 [2024-12-05 14:01:28.678892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.255 [2024-12-05 14:01:28.684056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.684198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.684227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.689253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.689427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.689456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.694352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.694489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.694518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.699449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.699588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.699617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.704527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.704663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.704692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.709619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.709749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.709778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.714790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.714951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.714980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.719858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.720026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.720055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.725020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.725228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.725258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.730155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.730313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.730341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.735350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.735518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.735547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.740478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.740611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.740640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.745543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.745688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.745717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.750739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.750887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.750915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.755797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.755964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.755994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.760893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.761029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.761058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.765952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.766112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.766141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.771127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.771319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.771347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.256 [2024-12-05 14:01:28.776212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.256 [2024-12-05 14:01:28.776405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.256 [2024-12-05 14:01:28.776443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.781303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.781501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.781530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.786354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.786556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.786585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.791467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.791649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.791678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.796544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.796721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.796755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.801645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.801798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.801827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.806736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.806893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.806922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.811948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.812096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.812124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.817134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.817305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.817334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.822212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.822391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.822426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.827302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.827459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.827496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.832372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.832552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.832582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.837478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.837613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.837639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.843062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.843240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.843269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.848318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.848463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.848491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.853508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.853678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.853706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.858694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.858889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.858917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.863775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.863956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.863985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.868891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.869024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.869052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.873924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.874112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.874141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.879069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.879205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.879233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.884169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.884279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.884307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.889224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.889356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.889384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.894383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.894550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.894580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.899448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.899634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.899664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.904549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.904731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.904758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.909647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.909804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.909833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.914709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.914879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.914908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.919774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.919972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.920001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.518 [2024-12-05 14:01:28.924966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.518 [2024-12-05 14:01:28.925174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.518 [2024-12-05 14:01:28.925202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.930042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.930247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.930280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.935308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.935463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.935492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.940390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.940543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.940571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.945498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.945624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.945653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.950747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.950958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.950987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.955852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.956027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.956056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.960916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.961113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.961141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.966089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.966323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.966350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.971274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.971408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.971443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.976349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.976489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.976518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.981440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.981570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.981598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.519 [2024-12-05 14:01:28.986505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.519 [2024-12-05 14:01:28.986645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.519 [2024-12-05 14:01:28.986674] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:28.991583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:28.991725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:28.991754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:28.996664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:28.996805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:28.996834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.001750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.001890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:29.001919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.006801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.006972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:29.007000] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.011976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.012175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:29.012203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.017064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.017258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:29.017287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.022185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.022336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:29.022364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.027403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.027546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:57.519 [2024-12-05 14:01:29.027575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.032456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.032611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:29.032640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.519 [2024-12-05 14:01:29.037650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.519 [2024-12-05 14:01:29.037827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.519 [2024-12-05 14:01:29.037855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.042739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.042918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.042946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.047841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.048008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.048036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.053041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.053203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.053232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.058153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.058291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.058320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.063231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.063396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.063439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.068385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.068572] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.068600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.073611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.073764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.073793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.078680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.078859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.078887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.779 5989.00 IOPS, 748.62 MiB/s [2024-12-05T13:01:29.305Z] [2024-12-05 14:01:29.084844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.085022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.085050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.089936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with 
pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.090100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.090129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.095435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.095561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.095590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.100505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.100651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.100691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.105433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.105558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.105587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.109815] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.110009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.110037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.114963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.115102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.115130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.120819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.120980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.121009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 14:01:29.126210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.779 [2024-12-05 14:01:29.126302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.779 [2024-12-05 14:01:29.126334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.779 [2024-12-05 
14:01:29.130537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.130620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.130647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.134761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.134855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.134883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.139852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.139962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.139988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.144309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.144386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.144413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.148546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.148628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.148655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.152804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.152890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.152917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.157068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.157137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.157163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.161269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.161351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.161376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.165564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.165644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.165670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.169829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.169928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.169954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.174151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.174239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.174265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.178519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.178603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.178630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.182760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.182843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.182869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.187038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.187116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.187152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.191306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.191376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.191402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.195557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.195634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:57.780 [2024-12-05 14:01:29.195661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.199698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.199783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.199810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.203902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.203992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.204019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.208122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.208230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.208257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.212488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.212567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.212594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.216691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.216769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.216796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.221134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.221237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.221265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.226162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.226345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.226373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.231882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.232050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.232081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.237497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.237653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.237683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.242501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.242631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.242659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.247604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.247824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.780 [2024-12-05 14:01:29.247853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:57.780 [2024-12-05 14:01:29.252676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:57.780 [2024-12-05 14:01:29.252822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.252851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.257795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.257976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.258004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.262944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.263088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.263117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.268041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.268165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.268193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.273138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.273380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.273408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.278177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.278336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.278364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.283412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.283560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.283588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.288527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.288786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.288815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.293537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.293750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.293778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:57.781 [2024-12-05 14:01:29.298794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:57.781 [2024-12-05 14:01:29.299084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.781 [2024-12-05 14:01:29.299113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.303808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.303964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.303992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.308992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.309166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.309195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.313621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.313729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.313762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.318614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.318811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.318839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.323606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.323763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.323792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.328784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.328953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.328982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.333887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.334044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.334073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.339051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.339239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.339267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.344043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.344217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.344245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.349321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.349537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.349566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.354597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.354739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.354768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.359673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.359830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.359858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.364781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.364956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.364985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.369907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.370101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.370129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.375047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.375185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.375214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.380200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.380389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.380427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.385309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.385481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.385509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.390442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.042 [2024-12-05 14:01:29.390568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.042 [2024-12-05 14:01:29.390595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.042 [2024-12-05 14:01:29.395513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.395655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.395683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.400708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.400858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.400886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.405774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.405922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.405950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.410890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.411015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.411043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.416029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.416207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.416235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.421048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.421187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.421215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.426207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.426376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.426404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.431403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.431580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.431610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.436480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.436672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.436700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.441594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.441752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.441780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.446757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.446951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.446984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.451771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.451915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.451943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.456843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.456993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.457021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.462049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.462185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.462214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.467135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.467274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.467302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.472296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.472485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.472514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.477399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.477563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.477591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.482549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.482648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.482677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.487654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.487831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.487860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.492824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.493049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.493078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.497824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.498014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.498042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.502943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.503093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.503122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.508035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.508173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.508201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.513108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.513252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.513279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.518300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.518461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.518489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.523563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.523673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.523701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.528776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.528876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.043 [2024-12-05 14:01:29.528904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.043 [2024-12-05 14:01:29.533940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.043 [2024-12-05 14:01:29.534082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.044 [2024-12-05 14:01:29.534111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.044 [2024-12-05 14:01:29.539022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.044 [2024-12-05 14:01:29.539162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.044 [2024-12-05 14:01:29.539191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.044 [2024-12-05 14:01:29.544172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.044 [2024-12-05 14:01:29.544370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.044 [2024-12-05 14:01:29.544398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.044 [2024-12-05 14:01:29.549220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.044 [2024-12-05 14:01:29.549333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.044 [2024-12-05 14:01:29.549361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.044 [2024-12-05 14:01:29.554386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.044 [2024-12-05 14:01:29.554530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.044 [2024-12-05 14:01:29.554559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.044 [2024-12-05 14:01:29.559487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.044 [2024-12-05 14:01:29.559616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.044 [2024-12-05 14:01:29.559644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.564522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.564698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.564726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.569620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.569770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.569798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.574669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.574857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.574885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.579737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.579920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.579949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.584808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.585005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.585032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.589891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.590090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.590118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.594914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.595051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.595078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.600603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.600745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.600774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.606293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.606481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.606510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.611250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.611438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.611466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.616223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.616316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.616348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.620840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.621038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.621066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.626055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.626237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.626271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.631580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.631684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.631712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.637163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.637239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.637265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.641506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.641594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.641620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.645900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.646057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.346 [2024-12-05 14:01:29.646085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:58.346 [2024-12-05 14:01:29.650366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8
00:29:58.346 [2024-12-05 14:01:29.650459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1
lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.346 [2024-12-05 14:01:29.650486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.346 [2024-12-05 14:01:29.654856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.654980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.655008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.659111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.659201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.659228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.663396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.663495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.663522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.668161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.668299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.668328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.673236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.673380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.673408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.678742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.678961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.678990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.684050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.684183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.684211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.688297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 
[2024-12-05 14:01:29.688407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.688445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.692618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.692708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.692734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.697030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.697124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.697151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.701456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.701544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.701571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.705810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.705882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.705909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.710236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.710346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.710374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.715055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.715257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.715286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.720159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.720346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.720375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.726274] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.726486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.726514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.731069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.731188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.731216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.735385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.735505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.735533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.739790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.739904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.739932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:58.347 [2024-12-05 14:01:29.744134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.744212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.744239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.748478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.748579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.748613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.753558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.753794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.753822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.758715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.758871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.758899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.764641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.764728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.764755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.769687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.769836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.769865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.774832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.775036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.775065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.347 [2024-12-05 14:01:29.780096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.347 [2024-12-05 14:01:29.780256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.347 [2024-12-05 14:01:29.780284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.785164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.785321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.785349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.790185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.790282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.790309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.795283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.795482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.795510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.800298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.800454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.800482] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.805519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.805672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.805699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.810672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.810859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.810888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.815819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.815933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.815962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.820881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.821016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:58.348 [2024-12-05 14:01:29.821044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.825951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.826103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.826131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.831017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.831174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.831202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.836009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.836137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.836165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.841218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.841458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.841487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.348 [2024-12-05 14:01:29.846251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.348 [2024-12-05 14:01:29.846434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.348 [2024-12-05 14:01:29.846463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.607 [2024-12-05 14:01:29.851767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.607 [2024-12-05 14:01:29.851888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.607 [2024-12-05 14:01:29.851917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.607 [2024-12-05 14:01:29.858015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.607 [2024-12-05 14:01:29.858163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.607 [2024-12-05 14:01:29.858190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.862609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.862733] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.862762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.867042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.867161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.867188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.871392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.871543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.871571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.875867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.875968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.875995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.880262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.880366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.880426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.884721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.884899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.884926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.889873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.890005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.894947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.895049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.895077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.900860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 
00:29:58.608 [2024-12-05 14:01:29.901010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.901038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.906104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.906224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.906252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.911297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.911451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.911480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.916487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.916641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.916668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.921539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.921721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.921749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.926615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.926804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.926833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.931841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.931999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.932027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.936926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.937090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.937117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.941994] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.942157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.942186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.947228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.947383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.947411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.952317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.952468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.952496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.957400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.957540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.957568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:58.608 [2024-12-05 14:01:29.962483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.962622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.962650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.967704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.967817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.967845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.972811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.972914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.972942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.977944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.978118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.978146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.983159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.983318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.983346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.988273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.988401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.608 [2024-12-05 14:01:29.988438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.608 [2024-12-05 14:01:29.993338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.608 [2024-12-05 14:01:29.993489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:29.993518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:29.998551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:29.998695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:29.998722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.003902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.004176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.004223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.009059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.009228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.009260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.014136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.014312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.014349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.019215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.019397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.019439] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.024569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.024760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.024797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.029663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.029817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.029847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.035500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.035636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.035664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.040656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.040777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:58.609 [2024-12-05 14:01:30.040807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.045079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.045243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.045271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.049512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.049634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.049663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.053959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.054079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.054108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.058361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.058468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.058496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.062971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.063063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.063090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.067351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.067445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.067472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.071620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.071765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.071795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.076014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.076115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.076143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.609 [2024-12-05 14:01:30.080263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6d5b0) with pdu=0x200016eff3c8 00:29:58.609 [2024-12-05 14:01:30.080378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.609 [2024-12-05 14:01:30.080407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.609 6128.50 IOPS, 766.06 MiB/s 00:29:58.609 Latency(us) 00:29:58.609 [2024-12-05T13:01:30.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.609 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:58.609 nvme0n1 : 2.00 6124.87 765.61 0.00 0.00 2604.88 1929.67 7233.23 00:29:58.609 [2024-12-05T13:01:30.135Z] =================================================================================================================== 00:29:58.609 [2024-12-05T13:01:30.135Z] Total : 6124.87 765.61 0.00 0.00 2604.88 1929.67 7233.23 00:29:58.609 { 00:29:58.609 "results": [ 00:29:58.609 { 00:29:58.609 "job": "nvme0n1", 00:29:58.609 "core_mask": "0x2", 00:29:58.609 "workload": "randwrite", 00:29:58.609 "status": "finished", 00:29:58.609 "queue_depth": 16, 00:29:58.609 "io_size": 131072, 00:29:58.609 "runtime": 2.003635, 00:29:58.609 "iops": 6124.868052314918, 00:29:58.609 "mibps": 765.6085065393647, 00:29:58.609 "io_failed": 0, 00:29:58.609 "io_timeout": 0, 00:29:58.609 "avg_latency_us": 2604.875221401323, 00:29:58.609 "min_latency_us": 1929.671111111111, 00:29:58.609 "max_latency_us": 7233.2325925925925 00:29:58.609 } 00:29:58.609 ], 
00:29:58.609 "core_count": 1 00:29:58.609 } 00:29:58.609 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:58.609 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:58.609 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:58.609 | .driver_specific 00:29:58.609 | .nvme_error 00:29:58.609 | .status_code 00:29:58.609 | .command_transient_transport_error' 00:29:58.609 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:58.869 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 396 > 0 )) 00:29:58.869 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2350035 00:29:58.869 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2350035 ']' 00:29:58.869 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2350035 00:29:58.869 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:58.869 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.869 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2350035 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2350035' 00:29:59.127 killing process with pid 2350035 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2350035 00:29:59.127 Received shutdown signal, test time was about 2.000000 seconds 00:29:59.127 00:29:59.127 Latency(us) 00:29:59.127 [2024-12-05T13:01:30.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.127 [2024-12-05T13:01:30.653Z] =================================================================================================================== 00:29:59.127 [2024-12-05T13:01:30.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2350035 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2348673 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2348673 ']' 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2348673 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.127 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348673 00:29:59.385 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.385 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.385 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2348673' 00:29:59.385 killing process with pid 2348673 00:29:59.385 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2348673 00:29:59.385 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2348673 00:29:59.385 00:29:59.385 real 0m15.569s 00:29:59.385 user 0m31.106s 00:29:59.385 sys 0m4.531s 00:29:59.385 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.385 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:59.385 ************************************ 00:29:59.385 END TEST nvmf_digest_error 00:29:59.385 ************************************ 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.645 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.645 rmmod nvme_tcp 00:29:59.646 rmmod nvme_fabrics 00:29:59.646 rmmod nvme_keyring 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' 
-n 2348673 ']' 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2348673 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2348673 ']' 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2348673 00:29:59.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2348673) - No such process 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2348673 is not found' 00:29:59.646 Process with pid 2348673 is not found 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.646 14:01:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr 
flush cvl_0_1 00:30:01.554 00:30:01.554 real 0m36.033s 00:30:01.554 user 1m3.813s 00:30:01.554 sys 0m10.538s 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.554 ************************************ 00:30:01.554 END TEST nvmf_digest 00:30:01.554 ************************************ 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.554 ************************************ 00:30:01.554 START TEST nvmf_bdevperf 00:30:01.554 ************************************ 00:30:01.554 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:01.813 * Looking for test storage... 
00:30:01.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.813 --rc genhtml_branch_coverage=1 00:30:01.813 --rc genhtml_function_coverage=1 00:30:01.813 --rc genhtml_legend=1 00:30:01.813 --rc geninfo_all_blocks=1 00:30:01.813 --rc geninfo_unexecuted_blocks=1 00:30:01.813 00:30:01.813 ' 00:30:01.813 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:30:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.813 --rc genhtml_branch_coverage=1 00:30:01.813 --rc genhtml_function_coverage=1 00:30:01.813 --rc genhtml_legend=1 00:30:01.813 --rc geninfo_all_blocks=1 00:30:01.814 --rc geninfo_unexecuted_blocks=1 00:30:01.814 00:30:01.814 ' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:01.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.814 --rc genhtml_branch_coverage=1 00:30:01.814 --rc genhtml_function_coverage=1 00:30:01.814 --rc genhtml_legend=1 00:30:01.814 --rc geninfo_all_blocks=1 00:30:01.814 --rc geninfo_unexecuted_blocks=1 00:30:01.814 00:30:01.814 ' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:01.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.814 --rc genhtml_branch_coverage=1 00:30:01.814 --rc genhtml_function_coverage=1 00:30:01.814 --rc genhtml_legend=1 00:30:01.814 --rc geninfo_all_blocks=1 00:30:01.814 --rc geninfo_unexecuted_blocks=1 00:30:01.814 00:30:01.814 ' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:01.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.814 14:01:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.348 14:01:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:04.348 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.348 
14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:04.348 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:04.348 Found net devices under 0000:09:00.0: cvl_0_0 00:30:04.348 14:01:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.348 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:04.349 Found net devices under 0000:09:00.1: cvl_0_1 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:30:04.349 00:30:04.349 --- 10.0.0.2 ping statistics --- 00:30:04.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.349 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:30:04.349 00:30:04.349 --- 10.0.0.1 ping statistics --- 00:30:04.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.349 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2352519 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2352519 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2352519 ']' 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
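For reference, the nvmf_tcp_init sequence traced above isolates one port of the E810 NIC in a network namespace so the target and initiator can exchange real TCP traffic on a single host. A minimal sketch of that plumbing, using the interface names and addresses from this log (requires root; cvl_0_0/cvl_0_1 are the ice driver's netdevs for 0000:09:00.0/0000:09:00.1):

```shell
# Move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

This is why the target app below is launched via `ip netns exec cvl_0_0_ns_spdk` while bdevperf runs in the root namespace.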
00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.349 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.349 [2024-12-05 14:01:35.668451] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:30:04.349 [2024-12-05 14:01:35.668533] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.349 [2024-12-05 14:01:35.741381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:04.349 [2024-12-05 14:01:35.797383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.349 [2024-12-05 14:01:35.797455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.349 [2024-12-05 14:01:35.797480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.349 [2024-12-05 14:01:35.797491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.349 [2024-12-05 14:01:35.797501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
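The tgt_init that the trace walks through next boils down to a short RPC sequence. Reconstructed from the rpc_cmd lines in this log (scripts/rpc.py is the standalone equivalent of the test suite's rpc_cmd helper):

```shell
# TCP transport with 8192-byte in-capsule data size
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks to back the namespace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem, namespace, and TCP listener on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The "Listening on 10.0.0.2 port 4420" notice in the trace confirms the final step succeeded.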
00:30:04.349 [2024-12-05 14:01:35.798910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.349 [2024-12-05 14:01:35.798952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.349 [2024-12-05 14:01:35.798955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.610 [2024-12-05 14:01:35.949592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.610 Malloc0 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.610 14:01:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.610 [2024-12-05 14:01:36.013953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:04.610 
14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.610 { 00:30:04.610 "params": { 00:30:04.610 "name": "Nvme$subsystem", 00:30:04.610 "trtype": "$TEST_TRANSPORT", 00:30:04.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.610 "adrfam": "ipv4", 00:30:04.610 "trsvcid": "$NVMF_PORT", 00:30:04.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.610 "hdgst": ${hdgst:-false}, 00:30:04.610 "ddgst": ${ddgst:-false} 00:30:04.610 }, 00:30:04.610 "method": "bdev_nvme_attach_controller" 00:30:04.610 } 00:30:04.610 EOF 00:30:04.610 )") 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:04.610 14:01:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:04.610 "params": { 00:30:04.610 "name": "Nvme1", 00:30:04.610 "trtype": "tcp", 00:30:04.610 "traddr": "10.0.0.2", 00:30:04.610 "adrfam": "ipv4", 00:30:04.610 "trsvcid": "4420", 00:30:04.610 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.610 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.610 "hdgst": false, 00:30:04.610 "ddgst": false 00:30:04.610 }, 00:30:04.610 "method": "bdev_nvme_attach_controller" 00:30:04.610 }' 00:30:04.610 [2024-12-05 14:01:36.066230] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
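The gen_nvmf_target_json heredoc above expands, after jq substitution, into the attach-controller entry that bdevperf reads from /dev/fd/62. The printf output in the log corresponds to this fragment (hdgst/ddgst default to false because neither was set in the environment):

```json
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
```

bdevperf then runs `-q 128 -o 4096 -w verify -t 1` against the resulting Nvme1n1 bdev, producing the IOPS table that follows.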
00:30:04.610 [2024-12-05 14:01:36.066298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352545 ] 00:30:04.610 [2024-12-05 14:01:36.134288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.882 [2024-12-05 14:01:36.196025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.141 Running I/O for 1 seconds... 00:30:06.077 8372.00 IOPS, 32.70 MiB/s 00:30:06.077 Latency(us) 00:30:06.077 [2024-12-05T13:01:37.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.077 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:06.077 Verification LBA range: start 0x0 length 0x4000 00:30:06.077 Nvme1n1 : 1.05 8111.99 31.69 0.00 0.00 15117.94 3373.89 45632.47 00:30:06.077 [2024-12-05T13:01:37.603Z] =================================================================================================================== 00:30:06.077 [2024-12-05T13:01:37.603Z] Total : 8111.99 31.69 0.00 0.00 15117.94 3373.89 45632.47 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2352702 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.335 { 00:30:06.335 "params": { 00:30:06.335 "name": "Nvme$subsystem", 00:30:06.335 "trtype": "$TEST_TRANSPORT", 00:30:06.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.335 "adrfam": "ipv4", 00:30:06.335 "trsvcid": "$NVMF_PORT", 00:30:06.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.335 "hdgst": ${hdgst:-false}, 00:30:06.335 "ddgst": ${ddgst:-false} 00:30:06.335 }, 00:30:06.335 "method": "bdev_nvme_attach_controller" 00:30:06.335 } 00:30:06.335 EOF 00:30:06.335 )") 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:06.335 14:01:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.335 "params": { 00:30:06.335 "name": "Nvme1", 00:30:06.335 "trtype": "tcp", 00:30:06.335 "traddr": "10.0.0.2", 00:30:06.335 "adrfam": "ipv4", 00:30:06.335 "trsvcid": "4420", 00:30:06.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.335 "hdgst": false, 00:30:06.335 "ddgst": false 00:30:06.335 }, 00:30:06.335 "method": "bdev_nvme_attach_controller" 00:30:06.335 }' 00:30:06.335 [2024-12-05 14:01:37.735793] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:30:06.335 [2024-12-05 14:01:37.735892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352702 ]
00:30:06.335 [2024-12-05 14:01:37.807773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:06.595 [2024-12-05 14:01:37.866114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:06.855 Running I/O for 15 seconds...
00:30:08.731 8468.00 IOPS, 33.08 MiB/s [2024-12-05T13:01:40.833Z]
8562.50 IOPS, 33.45 MiB/s [2024-12-05T13:01:40.833Z]
14:01:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2352519
00:30:09.307 14:01:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:30:09.307 [2024-12-05 14:01:40.701002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:09.307 [2024-12-05 14:01:40.701050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:09.307 [... repeated nvme_qpair record pairs elided: after the target process (pid 2352519) is killed, every in-flight command on qid:1 is printed by nvme_io_qpair_print_command (READ lba:40848-40904, WRITE lba:41040-41856, len:8 each) and completes via spdk_nvme_print_completion with status ABORTED - SQ DELETION (00/08); the pattern continues past the end of this excerpt ...]
lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.310 [2024-12-05 14:01:40.704787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.704803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.704817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.704831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.704848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.704863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.704876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.704890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.704903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.704916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.704929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 
[2024-12-05 14:01:40.704943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.704956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.704970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.704982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.704996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.310 [2024-12-05 14:01:40.705197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.705209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b26b0 is same with the state(6) to be set 00:30:09.310 [2024-12-05 14:01:40.705230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:09.310 [2024-12-05 14:01:40.705241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:09.310 [2024-12-05 14:01:40.705252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41032 len:8 PRP1 0x0 PRP2 0x0 00:30:09.310 [2024-12-05 14:01:40.705264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.310 [2024-12-05 14:01:40.708384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.310 [2024-12-05 14:01:40.708705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.310 [2024-12-05 14:01:40.709367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-12-05 14:01:40.709430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.310 [2024-12-05 14:01:40.709468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.310 [2024-12-05 14:01:40.709688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.310 [2024-12-05 14:01:40.709920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.310 [2024-12-05 14:01:40.709939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.310 [2024-12-05 14:01:40.709953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.310 [2024-12-05 14:01:40.709973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.310 [2024-12-05 14:01:40.722272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.310 [2024-12-05 14:01:40.722688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-12-05 14:01:40.722741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.722758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.723004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.723194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.723214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.723227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.723240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.735375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.735794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.735824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.735841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.736083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.736288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.736308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.736321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.736334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.748456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.748864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.748894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.748910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.749150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.749357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.749377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.749391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.749430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.761498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.761913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.761946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.761964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.762211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.762400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.762444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.762461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.762474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.774711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.775125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.775153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.775169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.775395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.775654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.775677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.775691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.775719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.788138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.788505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.788544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.788562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.788820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.789015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.789035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.789048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.789062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.801430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.801798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.801827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.801844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.802087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.802297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.802318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.802331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.802345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.814711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.815148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.815178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.815195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.815451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.815680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.815718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.815732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.815746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.311 [2024-12-05 14:01:40.828234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.311 [2024-12-05 14:01:40.828581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-12-05 14:01:40.828612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.311 [2024-12-05 14:01:40.828629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.311 [2024-12-05 14:01:40.828862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.311 [2024-12-05 14:01:40.829097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.311 [2024-12-05 14:01:40.829119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.311 [2024-12-05 14:01:40.829133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.311 [2024-12-05 14:01:40.829147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.571 [2024-12-05 14:01:40.841765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.571 [2024-12-05 14:01:40.842185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-12-05 14:01:40.842214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.571 [2024-12-05 14:01:40.842232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.842488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.842733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.842774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.842789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.842804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.854970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.855352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.855381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.855398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.855655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.855886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.855907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.855920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.855933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.868313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.868662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.868691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.868708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.868933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.869144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.869164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.869178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.869191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.881503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.881904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.881935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.881953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.882197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.882392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.882413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.882456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.882475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.894831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.895184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.895213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.895230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.895466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.895673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.895695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.895709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.895739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.908025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.908347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.908376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.908393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.908664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.908899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.908920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.908933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.908946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.921195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.921610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.921640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.921657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.921913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.922124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.922144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.922159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.922172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.934549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.934922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.934956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.934973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.935212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.935447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.935469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.935483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.935511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.947775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.948125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.948154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.948171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.948409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.948672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.948695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.948710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.948724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.960998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.961309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.572 [2024-12-05 14:01:40.961362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.572 [2024-12-05 14:01:40.961386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.572 [2024-12-05 14:01:40.961698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.572 [2024-12-05 14:01:40.961981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.572 [2024-12-05 14:01:40.962010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.572 [2024-12-05 14:01:40.962031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.572 [2024-12-05 14:01:40.962053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.572 [2024-12-05 14:01:40.975575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.572 [2024-12-05 14:01:40.976309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:40.976364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:40.976392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:40.976729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:40.977029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:40.977073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:40.977094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:40.977115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:40.990122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:40.990513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:40.990547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:40.990566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:40.990861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:40.991141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:40.991172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:40.991193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:40.991229] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:41.003449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:41.003783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:41.003813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:41.003831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:41.004056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:41.004267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:41.004286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:41.004299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:41.004313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:41.016776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:41.017194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:41.017223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:41.017239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:41.017494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:41.017732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:41.017759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:41.017773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:41.017787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:41.030128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:41.030565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:41.030596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:41.030615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:41.030858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:41.031069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:41.031090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:41.031104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:41.031117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:41.043440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:41.043769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:41.043798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:41.043815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:41.044040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:41.044250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:41.044271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:41.044284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:41.044297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:41.056631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:41.057001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:41.057030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:41.057047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:41.057294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:41.057549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:41.057572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:41.057587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:41.057606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:41.069944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:41.070360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:41.070390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:41.070408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:41.070668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:41.070881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:41.070901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:41.070915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:41.070928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.573 [2024-12-05 14:01:41.083149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.573 [2024-12-05 14:01:41.083527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.573 [2024-12-05 14:01:41.083557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.573 [2024-12-05 14:01:41.083575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.573 [2024-12-05 14:01:41.083807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.573 [2024-12-05 14:01:41.084018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.573 [2024-12-05 14:01:41.084038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.573 [2024-12-05 14:01:41.084051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.573 [2024-12-05 14:01:41.084064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.833 [2024-12-05 14:01:41.096817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.833 [2024-12-05 14:01:41.097250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-12-05 14:01:41.097280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.833 [2024-12-05 14:01:41.097297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.833 [2024-12-05 14:01:41.097569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.834 [2024-12-05 14:01:41.097813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.834 [2024-12-05 14:01:41.097849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.834 [2024-12-05 14:01:41.097864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.834 [2024-12-05 14:01:41.097877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.834 [2024-12-05 14:01:41.110178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.834 [2024-12-05 14:01:41.110499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-12-05 14:01:41.110537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.834 [2024-12-05 14:01:41.110556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.834 [2024-12-05 14:01:41.110779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.834 [2024-12-05 14:01:41.110991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.834 [2024-12-05 14:01:41.111010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.834 [2024-12-05 14:01:41.111023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.834 [2024-12-05 14:01:41.111037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.834 [2024-12-05 14:01:41.123389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.834 [2024-12-05 14:01:41.123740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-12-05 14:01:41.123769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.834 [2024-12-05 14:01:41.123786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.834 [2024-12-05 14:01:41.124010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.834 [2024-12-05 14:01:41.124220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.834 [2024-12-05 14:01:41.124240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.834 [2024-12-05 14:01:41.124253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.834 [2024-12-05 14:01:41.124266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.834 [2024-12-05 14:01:41.136614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.834 [2024-12-05 14:01:41.136951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-12-05 14:01:41.136980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.834 [2024-12-05 14:01:41.136997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.834 [2024-12-05 14:01:41.137222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.834 [2024-12-05 14:01:41.137459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.834 [2024-12-05 14:01:41.137480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.834 [2024-12-05 14:01:41.137493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.834 [2024-12-05 14:01:41.137506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.834 [2024-12-05 14:01:41.149875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.834 [2024-12-05 14:01:41.150289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-12-05 14:01:41.150318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.834 [2024-12-05 14:01:41.150335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.834 [2024-12-05 14:01:41.150594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.834 [2024-12-05 14:01:41.150809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.834 [2024-12-05 14:01:41.150830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.834 [2024-12-05 14:01:41.150843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.834 [2024-12-05 14:01:41.150855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.834 [2024-12-05 14:01:41.163182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.834 [2024-12-05 14:01:41.163572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-12-05 14:01:41.163601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.834 [2024-12-05 14:01:41.163619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.834 [2024-12-05 14:01:41.163862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.834 [2024-12-05 14:01:41.164072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.834 [2024-12-05 14:01:41.164091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.834 [2024-12-05 14:01:41.164105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.834 [2024-12-05 14:01:41.164118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.834 [2024-12-05 14:01:41.176511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.834 [2024-12-05 14:01:41.176849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-12-05 14:01:41.176876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.834 [2024-12-05 14:01:41.176893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.834 [2024-12-05 14:01:41.177113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.834 [2024-12-05 14:01:41.177325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.834 [2024-12-05 14:01:41.177345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.834 [2024-12-05 14:01:41.177358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.834 [2024-12-05 14:01:41.177371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.834 7163.33 IOPS, 27.98 MiB/s [2024-12-05T13:01:41.360Z] [2024-12-05 14:01:41.191305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.834 [2024-12-05 14:01:41.191686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-12-05 14:01:41.191715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.834 [2024-12-05 14:01:41.191747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.835 [2024-12-05 14:01:41.191977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.835 [2024-12-05 14:01:41.192173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.835 [2024-12-05 14:01:41.192197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.835 [2024-12-05 14:01:41.192211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.835 [2024-12-05 14:01:41.192224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.835 [2024-12-05 14:01:41.204697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.835 [2024-12-05 14:01:41.205061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-12-05 14:01:41.205092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.835 [2024-12-05 14:01:41.205109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.835 [2024-12-05 14:01:41.205336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.835 [2024-12-05 14:01:41.205576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.835 [2024-12-05 14:01:41.205597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.835 [2024-12-05 14:01:41.205611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.835 [2024-12-05 14:01:41.205625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.835 [2024-12-05 14:01:41.217954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.835 [2024-12-05 14:01:41.218395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-12-05 14:01:41.218443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.835 [2024-12-05 14:01:41.218472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.835 [2024-12-05 14:01:41.218771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.835 [2024-12-05 14:01:41.219016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.835 [2024-12-05 14:01:41.219044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.835 [2024-12-05 14:01:41.219065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.835 [2024-12-05 14:01:41.219086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.835 [2024-12-05 14:01:41.231590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.835 [2024-12-05 14:01:41.232052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-12-05 14:01:41.232083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.835 [2024-12-05 14:01:41.232100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.835 [2024-12-05 14:01:41.232337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.835 [2024-12-05 14:01:41.232590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.835 [2024-12-05 14:01:41.232612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.835 [2024-12-05 14:01:41.232626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.835 [2024-12-05 14:01:41.232645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.835 [2024-12-05 14:01:41.245077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.835 [2024-12-05 14:01:41.245511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-12-05 14:01:41.245542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.835 [2024-12-05 14:01:41.245560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.835 [2024-12-05 14:01:41.245811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.835 [2024-12-05 14:01:41.246021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.835 [2024-12-05 14:01:41.246041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.835 [2024-12-05 14:01:41.246055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.835 [2024-12-05 14:01:41.246067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.835 [2024-12-05 14:01:41.258408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.835 [2024-12-05 14:01:41.258776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-12-05 14:01:41.258807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.835 [2024-12-05 14:01:41.258824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.835 [2024-12-05 14:01:41.259066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.835 [2024-12-05 14:01:41.259278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.835 [2024-12-05 14:01:41.259299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.835 [2024-12-05 14:01:41.259312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.835 [2024-12-05 14:01:41.259325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.835 [2024-12-05 14:01:41.271636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.835 [2024-12-05 14:01:41.272005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-12-05 14:01:41.272034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.835 [2024-12-05 14:01:41.272051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.835 [2024-12-05 14:01:41.272289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.835 [2024-12-05 14:01:41.272549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.835 [2024-12-05 14:01:41.272572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.835 [2024-12-05 14:01:41.272587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.835 [2024-12-05 14:01:41.272602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.835 [2024-12-05 14:01:41.284975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.835 [2024-12-05 14:01:41.285288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-12-05 14:01:41.285316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.835 [2024-12-05 14:01:41.285332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.836 [2024-12-05 14:01:41.285580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.836 [2024-12-05 14:01:41.285811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.836 [2024-12-05 14:01:41.285832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.836 [2024-12-05 14:01:41.285846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.836 [2024-12-05 14:01:41.285859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.836 [2024-12-05 14:01:41.298335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.836 [2024-12-05 14:01:41.298709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.836 [2024-12-05 14:01:41.298753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.836 [2024-12-05 14:01:41.298769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.836 [2024-12-05 14:01:41.299000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.836 [2024-12-05 14:01:41.299195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.836 [2024-12-05 14:01:41.299214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.836 [2024-12-05 14:01:41.299228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.836 [2024-12-05 14:01:41.299241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.836 [2024-12-05 14:01:41.311586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.836 [2024-12-05 14:01:41.311973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.836 [2024-12-05 14:01:41.312002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.836 [2024-12-05 14:01:41.312019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.836 [2024-12-05 14:01:41.312256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.836 [2024-12-05 14:01:41.312495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.836 [2024-12-05 14:01:41.312517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.836 [2024-12-05 14:01:41.312530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.836 [2024-12-05 14:01:41.312543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.836 [2024-12-05 14:01:41.324831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.836 [2024-12-05 14:01:41.325134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.836 [2024-12-05 14:01:41.325162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.836 [2024-12-05 14:01:41.325178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.836 [2024-12-05 14:01:41.325403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.836 [2024-12-05 14:01:41.325615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.836 [2024-12-05 14:01:41.325635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.836 [2024-12-05 14:01:41.325648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.836 [2024-12-05 14:01:41.325661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.836 [2024-12-05 14:01:41.338183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.836 [2024-12-05 14:01:41.338502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.836 [2024-12-05 14:01:41.338532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.836 [2024-12-05 14:01:41.338549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.836 [2024-12-05 14:01:41.338774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.836 [2024-12-05 14:01:41.338986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.836 [2024-12-05 14:01:41.339006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.836 [2024-12-05 14:01:41.339020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.836 [2024-12-05 14:01:41.339033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.836 [2024-12-05 14:01:41.351467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.836 [2024-12-05 14:01:41.351861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.836 [2024-12-05 14:01:41.351891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:09.836 [2024-12-05 14:01:41.351908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:09.836 [2024-12-05 14:01:41.352132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:09.836 [2024-12-05 14:01:41.352342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.836 [2024-12-05 14:01:41.352362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.836 [2024-12-05 14:01:41.352376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.836 [2024-12-05 14:01:41.352390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.365036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.096 [2024-12-05 14:01:41.365464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.096 [2024-12-05 14:01:41.365494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.096 [2024-12-05 14:01:41.365511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.096 [2024-12-05 14:01:41.365757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.096 [2024-12-05 14:01:41.365969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.096 [2024-12-05 14:01:41.365994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.096 [2024-12-05 14:01:41.366007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.096 [2024-12-05 14:01:41.366020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.378456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.096 [2024-12-05 14:01:41.378886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.096 [2024-12-05 14:01:41.378915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.096 [2024-12-05 14:01:41.378933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.096 [2024-12-05 14:01:41.379186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.096 [2024-12-05 14:01:41.379381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.096 [2024-12-05 14:01:41.379425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.096 [2024-12-05 14:01:41.379441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.096 [2024-12-05 14:01:41.379455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.391791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.096 [2024-12-05 14:01:41.392174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.096 [2024-12-05 14:01:41.392202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.096 [2024-12-05 14:01:41.392219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.096 [2024-12-05 14:01:41.392468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.096 [2024-12-05 14:01:41.392684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.096 [2024-12-05 14:01:41.392706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.096 [2024-12-05 14:01:41.392736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.096 [2024-12-05 14:01:41.392750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.405100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.096 [2024-12-05 14:01:41.405488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.096 [2024-12-05 14:01:41.405518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.096 [2024-12-05 14:01:41.405535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.096 [2024-12-05 14:01:41.405779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.096 [2024-12-05 14:01:41.405976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.096 [2024-12-05 14:01:41.405997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.096 [2024-12-05 14:01:41.406010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.096 [2024-12-05 14:01:41.406029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.418359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.096 [2024-12-05 14:01:41.418705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.096 [2024-12-05 14:01:41.418736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.096 [2024-12-05 14:01:41.418753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.096 [2024-12-05 14:01:41.419005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.096 [2024-12-05 14:01:41.419200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.096 [2024-12-05 14:01:41.419221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.096 [2024-12-05 14:01:41.419233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.096 [2024-12-05 14:01:41.419246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.431637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.096 [2024-12-05 14:01:41.432071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.096 [2024-12-05 14:01:41.432100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.096 [2024-12-05 14:01:41.432118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.096 [2024-12-05 14:01:41.432362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.096 [2024-12-05 14:01:41.432606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.096 [2024-12-05 14:01:41.432627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.096 [2024-12-05 14:01:41.432641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.096 [2024-12-05 14:01:41.432654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.444932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.096 [2024-12-05 14:01:41.445267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.096 [2024-12-05 14:01:41.445297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.096 [2024-12-05 14:01:41.445314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.096 [2024-12-05 14:01:41.445586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.096 [2024-12-05 14:01:41.445801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.096 [2024-12-05 14:01:41.445823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.096 [2024-12-05 14:01:41.445837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.096 [2024-12-05 14:01:41.445850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.096 [2024-12-05 14:01:41.458139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.458566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.458600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.458619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.458863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.459075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.459095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.459109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.459122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.471377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.471779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.471819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.471845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.472138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.472384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.472435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.472457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.472479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.485367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.485781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.485813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.485832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.486080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.486276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.486297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.486311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.486325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.498784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.499157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.499186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.499202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.499437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.499659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.499681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.499695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.499709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.512137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.512491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.512522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.512539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.512783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.512992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.513013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.513026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.513039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.525432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.525793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.525822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.525840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.526082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.526297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.526318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.526332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.526346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.538634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.539009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.539040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.539057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.539299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.539538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.539566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.539581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.539595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.551900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.097 [2024-12-05 14:01:41.552300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.097 [2024-12-05 14:01:41.552329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.097 [2024-12-05 14:01:41.552346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.097 [2024-12-05 14:01:41.552616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.097 [2024-12-05 14:01:41.552833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.097 [2024-12-05 14:01:41.552854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.097 [2024-12-05 14:01:41.552867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.097 [2024-12-05 14:01:41.552880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.097 [2024-12-05 14:01:41.565150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.097 [2024-12-05 14:01:41.565563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.097 [2024-12-05 14:01:41.565594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.097 [2024-12-05 14:01:41.565611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.097 [2024-12-05 14:01:41.565857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.097 [2024-12-05 14:01:41.566067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.097 [2024-12-05 14:01:41.566087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.097 [2024-12-05 14:01:41.566101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.097 [2024-12-05 14:01:41.566114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.097 [2024-12-05 14:01:41.578454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.097 [2024-12-05 14:01:41.578782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.097 [2024-12-05 14:01:41.578810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.097 [2024-12-05 14:01:41.578826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.097 [2024-12-05 14:01:41.579060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.097 [2024-12-05 14:01:41.579272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.097 [2024-12-05 14:01:41.579294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.097 [2024-12-05 14:01:41.579307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.097 [2024-12-05 14:01:41.579326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.098 [2024-12-05 14:01:41.591671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.098 [2024-12-05 14:01:41.592040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.098 [2024-12-05 14:01:41.592070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.098 [2024-12-05 14:01:41.592087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.098 [2024-12-05 14:01:41.592331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.098 [2024-12-05 14:01:41.592588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.098 [2024-12-05 14:01:41.592611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.098 [2024-12-05 14:01:41.592625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.098 [2024-12-05 14:01:41.592638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.098 [2024-12-05 14:01:41.605022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.098 [2024-12-05 14:01:41.605375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.098 [2024-12-05 14:01:41.605405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.098 [2024-12-05 14:01:41.605431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.098 [2024-12-05 14:01:41.605686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.098 [2024-12-05 14:01:41.605898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.098 [2024-12-05 14:01:41.605917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.098 [2024-12-05 14:01:41.605930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.098 [2024-12-05 14:01:41.605943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.098 [2024-12-05 14:01:41.618527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.098 [2024-12-05 14:01:41.618950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.098 [2024-12-05 14:01:41.618979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.098 [2024-12-05 14:01:41.618996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.098 [2024-12-05 14:01:41.619246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.098 [2024-12-05 14:01:41.619468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.098 [2024-12-05 14:01:41.619492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.098 [2024-12-05 14:01:41.619506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.098 [2024-12-05 14:01:41.619518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.631945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.632363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.632397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.632414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.632646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.632878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.632899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.632913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.632926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.645263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.645621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.645651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.645668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.645924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.646128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.646149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.646161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.646174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.658709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.659132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.659161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.659178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.659425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.659641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.659661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.659674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.659688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.671865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.672174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.672203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.672219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.672455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.672687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.672709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.672722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.672735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.685107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.685520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.685550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.685568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.685806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.686014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.686034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.686047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.686060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.698236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.698592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.698621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.698638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.698876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.699080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.699100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.699113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.699126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.711331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.711682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.711711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.711729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.711966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.712171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.712194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.712209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.712221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.724464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.724868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.724907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.358 [2024-12-05 14:01:41.724934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.358 [2024-12-05 14:01:41.725230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.358 [2024-12-05 14:01:41.725819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.358 [2024-12-05 14:01:41.725861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.358 [2024-12-05 14:01:41.725881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.358 [2024-12-05 14:01:41.725902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.358 [2024-12-05 14:01:41.738710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.358 [2024-12-05 14:01:41.739082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.358 [2024-12-05 14:01:41.739114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.739132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.739371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.739598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.739621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.739635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.739649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.751900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.752246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.752277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.752294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.752565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.752779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.752799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.752812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.752832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.765154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.765509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.765539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.765556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.765793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.765998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.766019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.766033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.766045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.778283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.778700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.778729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.778746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.778983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.779172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.779192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.779206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.779218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.791443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.791850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.791878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.791894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.792110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.792313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.792332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.792345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.792358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.804614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.805043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.805077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.805095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.805331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.805565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.805586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.805601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.805614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.817852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.818258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.818287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.818304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.818559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.818802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.818822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.818835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.818847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.831022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.831395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.831429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.831461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.831688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.831909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.831929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.831943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.831956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.844106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.844513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.844542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.844558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.844800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.845007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.845027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.845040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.845054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.857267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.857683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.359 [2024-12-05 14:01:41.857712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.359 [2024-12-05 14:01:41.857728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.359 [2024-12-05 14:01:41.857965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.359 [2024-12-05 14:01:41.858169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.359 [2024-12-05 14:01:41.858190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.359 [2024-12-05 14:01:41.858203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.359 [2024-12-05 14:01:41.858215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.359 [2024-12-05 14:01:41.870292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.359 [2024-12-05 14:01:41.870682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.360 [2024-12-05 14:01:41.870711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.360 [2024-12-05 14:01:41.870727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.360 [2024-12-05 14:01:41.870945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.360 [2024-12-05 14:01:41.871151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.360 [2024-12-05 14:01:41.871171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.360 [2024-12-05 14:01:41.871186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.360 [2024-12-05 14:01:41.871199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.620 [2024-12-05 14:01:41.883700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.884058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.620 [2024-12-05 14:01:41.884087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.620 [2024-12-05 14:01:41.884103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.620 [2024-12-05 14:01:41.884320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.620 [2024-12-05 14:01:41.884578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.620 [2024-12-05 14:01:41.884606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.620 [2024-12-05 14:01:41.884621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.620 [2024-12-05 14:01:41.884635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.620 [2024-12-05 14:01:41.896796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.897115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.620 [2024-12-05 14:01:41.897184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.620 [2024-12-05 14:01:41.897201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.620 [2024-12-05 14:01:41.897444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.620 [2024-12-05 14:01:41.897640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.620 [2024-12-05 14:01:41.897660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.620 [2024-12-05 14:01:41.897673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.620 [2024-12-05 14:01:41.897686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.620 [2024-12-05 14:01:41.909938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.910347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.620 [2024-12-05 14:01:41.910376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.620 [2024-12-05 14:01:41.910394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.620 [2024-12-05 14:01:41.910658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.620 [2024-12-05 14:01:41.910882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.620 [2024-12-05 14:01:41.910902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.620 [2024-12-05 14:01:41.910916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.620 [2024-12-05 14:01:41.910929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.620 [2024-12-05 14:01:41.923014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.923429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.620 [2024-12-05 14:01:41.923458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.620 [2024-12-05 14:01:41.923474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.620 [2024-12-05 14:01:41.923712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.620 [2024-12-05 14:01:41.923916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.620 [2024-12-05 14:01:41.923936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.620 [2024-12-05 14:01:41.923949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.620 [2024-12-05 14:01:41.923967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.620 [2024-12-05 14:01:41.936006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.936351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.620 [2024-12-05 14:01:41.936381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.620 [2024-12-05 14:01:41.936398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.620 [2024-12-05 14:01:41.936666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.620 [2024-12-05 14:01:41.936892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.620 [2024-12-05 14:01:41.936912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.620 [2024-12-05 14:01:41.936925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.620 [2024-12-05 14:01:41.936938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.620 [2024-12-05 14:01:41.949175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.949521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.620 [2024-12-05 14:01:41.949550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.620 [2024-12-05 14:01:41.949567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.620 [2024-12-05 14:01:41.949798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.620 [2024-12-05 14:01:41.949987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.620 [2024-12-05 14:01:41.950008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.620 [2024-12-05 14:01:41.950021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.620 [2024-12-05 14:01:41.950034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.620 [2024-12-05 14:01:41.962264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.962620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.620 [2024-12-05 14:01:41.962649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.620 [2024-12-05 14:01:41.962666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.620 [2024-12-05 14:01:41.962903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.620 [2024-12-05 14:01:41.963107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.620 [2024-12-05 14:01:41.963128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.620 [2024-12-05 14:01:41.963141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.620 [2024-12-05 14:01:41.963154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.620 [2024-12-05 14:01:41.975398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.620 [2024-12-05 14:01:41.975793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:41.975839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:41.975868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:41.976163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:41.976431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:41.976473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:41.976497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:41.976519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:41.989618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:41.990067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:41.990099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:41.990117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:41.990355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:41.990592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:41.990614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:41.990628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:41.990642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.002940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.003352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.003382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.003414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.003671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.003878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.003899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.003912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.003925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.016032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.016438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.016467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.016484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.016726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.016931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.016952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.016965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.016977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.029214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.029568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.029598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.029614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.029852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.030055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.030075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.030088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.030101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.042307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.042720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.042749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.042766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.043003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.043192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.043212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.043225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.043238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.055520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.055893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.055921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.055938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.056172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.056378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.056402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.056427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.056457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.068758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.069081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.069111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.069127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.069345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.069585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.069606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.069619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.069632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.081873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.082218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.082247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.082264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.082515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.082730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.082749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.082763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.082776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.094952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.621 [2024-12-05 14:01:42.095299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.621 [2024-12-05 14:01:42.095328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.621 [2024-12-05 14:01:42.095344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.621 [2024-12-05 14:01:42.095612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.621 [2024-12-05 14:01:42.095838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.621 [2024-12-05 14:01:42.095859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.621 [2024-12-05 14:01:42.095872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.621 [2024-12-05 14:01:42.095890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.621 [2024-12-05 14:01:42.108106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.622 [2024-12-05 14:01:42.108396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.622 [2024-12-05 14:01:42.108446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.622 [2024-12-05 14:01:42.108464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.622 [2024-12-05 14:01:42.108682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.622 [2024-12-05 14:01:42.108888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.622 [2024-12-05 14:01:42.108908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.622 [2024-12-05 14:01:42.108920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.622 [2024-12-05 14:01:42.108932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.622 [2024-12-05 14:01:42.121083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.622 [2024-12-05 14:01:42.121487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.622 [2024-12-05 14:01:42.121516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.622 [2024-12-05 14:01:42.121533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.622 [2024-12-05 14:01:42.121770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.622 [2024-12-05 14:01:42.121976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.622 [2024-12-05 14:01:42.121996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.622 [2024-12-05 14:01:42.122009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.622 [2024-12-05 14:01:42.122022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.622 [2024-12-05 14:01:42.134250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.622 [2024-12-05 14:01:42.134629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.622 [2024-12-05 14:01:42.134658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.622 [2024-12-05 14:01:42.134674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.622 [2024-12-05 14:01:42.134893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.622 [2024-12-05 14:01:42.135099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.622 [2024-12-05 14:01:42.135119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.622 [2024-12-05 14:01:42.135133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.622 [2024-12-05 14:01:42.135145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.883 [2024-12-05 14:01:42.147590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.883 [2024-12-05 14:01:42.147921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.883 [2024-12-05 14:01:42.147954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.883 [2024-12-05 14:01:42.147970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.883 [2024-12-05 14:01:42.148191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.883 [2024-12-05 14:01:42.148414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.883 [2024-12-05 14:01:42.148462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.883 [2024-12-05 14:01:42.148476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.883 [2024-12-05 14:01:42.148504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.883 [2024-12-05 14:01:42.160703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.883 [2024-12-05 14:01:42.161077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.883 [2024-12-05 14:01:42.161105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:10.883 [2024-12-05 14:01:42.161121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:10.883 [2024-12-05 14:01:42.161339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:10.883 [2024-12-05 14:01:42.161575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.883 [2024-12-05 14:01:42.161594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.883 [2024-12-05 14:01:42.161609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.883 [2024-12-05 14:01:42.161622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.883 [2024-12-05 14:01:42.174028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.883 [2024-12-05 14:01:42.174445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.883 [2024-12-05 14:01:42.174494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.883 [2024-12-05 14:01:42.174512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.883 [2024-12-05 14:01:42.174766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.883 [2024-12-05 14:01:42.174955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.883 [2024-12-05 14:01:42.174975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.883 [2024-12-05 14:01:42.174988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.883 [2024-12-05 14:01:42.175000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.883 [2024-12-05 14:01:42.187058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.883 [2024-12-05 14:01:42.187354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.883 [2024-12-05 14:01:42.187453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.883 [2024-12-05 14:01:42.187472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.883 [2024-12-05 14:01:42.187708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.883 [2024-12-05 14:01:42.187914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.883 [2024-12-05 14:01:42.187934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.883 [2024-12-05 14:01:42.187947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.883 [2024-12-05 14:01:42.187960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.883 5372.50 IOPS, 20.99 MiB/s [2024-12-05T13:01:42.409Z]
00:30:10.883 [2024-12-05 14:01:42.200179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.883 [2024-12-05 14:01:42.200523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.883 [2024-12-05 14:01:42.200553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.883 [2024-12-05 14:01:42.200570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.883 [2024-12-05 14:01:42.200806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.883 [2024-12-05 14:01:42.201010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.883 [2024-12-05 14:01:42.201031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.883 [2024-12-05 14:01:42.201044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.883 [2024-12-05 14:01:42.201056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.883 [2024-12-05 14:01:42.213263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.883 [2024-12-05 14:01:42.213705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.883 [2024-12-05 14:01:42.213736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.883 [2024-12-05 14:01:42.213768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.883 [2024-12-05 14:01:42.214006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.883 [2024-12-05 14:01:42.214210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.883 [2024-12-05 14:01:42.214230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.883 [2024-12-05 14:01:42.214244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.883 [2024-12-05 14:01:42.214257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.883 [2024-12-05 14:01:42.226541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.883 [2024-12-05 14:01:42.226998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.883 [2024-12-05 14:01:42.227035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.227062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.227362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.227655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.227691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.227713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.227734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.240879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.241345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.241399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.241442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.241710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.241932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.241952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.241965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.241978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.254212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.254570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.254600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.254617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.254849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.255080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.255121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.255135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.255149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.267500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.267922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.267975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.267992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.268236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.268455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.268490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.268504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.268522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.280748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.281216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.281268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.281284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.281549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.281759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.281779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.281792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.281805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.293960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.294305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.294333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.294350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.294614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.294823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.294842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.294855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.294867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.307219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.307570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.307599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.307617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.307861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.308065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.308085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.308097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.308110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.320401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.320780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.320809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.320826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.321062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.321251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.321271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.321283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.321296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.333725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.334147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.334176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.334192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.334438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.334650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.334670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.334683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.334696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.346882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.347224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.347253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.347269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.347518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.347719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.347753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.347767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.347779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.360147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.360556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.360586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.360603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.360849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.361054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.361073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.361086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.361098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.373391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.373724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.373753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.373770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.373990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.374196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.374217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.884 [2024-12-05 14:01:42.374231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.884 [2024-12-05 14:01:42.374244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.884 [2024-12-05 14:01:42.386661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.884 [2024-12-05 14:01:42.386998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.884 [2024-12-05 14:01:42.387027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.884 [2024-12-05 14:01:42.387044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.884 [2024-12-05 14:01:42.387263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.884 [2024-12-05 14:01:42.387513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.884 [2024-12-05 14:01:42.387534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.885 [2024-12-05 14:01:42.387548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.885 [2024-12-05 14:01:42.387561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.885 [2024-12-05 14:01:42.399791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.885 [2024-12-05 14:01:42.400146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.885 [2024-12-05 14:01:42.400189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:10.885 [2024-12-05 14:01:42.400205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:10.885 [2024-12-05 14:01:42.400433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:10.885 [2024-12-05 14:01:42.400634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.885 [2024-12-05 14:01:42.400662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.885 [2024-12-05 14:01:42.400677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.885 [2024-12-05 14:01:42.400690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.413063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.413437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.413466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.413484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.413718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.413906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.413926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.413939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.413953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.426322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.426737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.426767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.426783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.427020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.427224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.427245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.427258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.427271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.439451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.439862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.439890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.439908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.440144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.440350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.440370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.440383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.440401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.452619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.452950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.452978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.452995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.453215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.453447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.453468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.453481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.453494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.465887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.466292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.466320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.466337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.466601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.466831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.466851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.466864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.466876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.479149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.479568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.479607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.479636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.479936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.480191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.480220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.480241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.480263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.493283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.493722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.493798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.493817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.494048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.494252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.494273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.494285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.494298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.506485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.506818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.506862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.506879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.507099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.507305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.507325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.146 [2024-12-05 14:01:42.507339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.146 [2024-12-05 14:01:42.507352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.146 [2024-12-05 14:01:42.519523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.146 [2024-12-05 14:01:42.519950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.146 [2024-12-05 14:01:42.519979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.146 [2024-12-05 14:01:42.519996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.146 [2024-12-05 14:01:42.520228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.146 [2024-12-05 14:01:42.520442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.146 [2024-12-05 14:01:42.520466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.147 [2024-12-05 14:01:42.520479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.147 [2024-12-05 14:01:42.520506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.147 [2024-12-05 14:01:42.532776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.147 [2024-12-05 14:01:42.533116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.147 [2024-12-05 14:01:42.533143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.147 [2024-12-05 14:01:42.533159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.147 [2024-12-05 14:01:42.533374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.147 [2024-12-05 14:01:42.533611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.147 [2024-12-05 14:01:42.533632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.147 [2024-12-05 14:01:42.533646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.147 [2024-12-05 14:01:42.533669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.147 [2024-12-05 14:01:42.545992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.546347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.546389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.546406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.546632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.546838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.546857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.546870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.546883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.559220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.559603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.559633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.559650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.559903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.560108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.560127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.560140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.560153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.572287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.572723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.572768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.572785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.573020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.573223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.573248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.573262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.573274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.585337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.585755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.585784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.585800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.586033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.586237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.586257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.586270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.586283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.598489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.598787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.598814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.598829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.599041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.599246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.599266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.599280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.599293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.611613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.612086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.612135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.612152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.612402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.612643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.612674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.612687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.612707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.624909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.625269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.625298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.625315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.625582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.625796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.625815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.625828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.625840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.638060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.638483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.638511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.638528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.638764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.638970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.638990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.639003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.639015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.651341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.651777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.651806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.651823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.652058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.652262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.652283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.652296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.652309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.147 [2024-12-05 14:01:42.664834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.147 [2024-12-05 14:01:42.665311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.147 [2024-12-05 14:01:42.665340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.147 [2024-12-05 14:01:42.665357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.147 [2024-12-05 14:01:42.665615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.147 [2024-12-05 14:01:42.665855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.147 [2024-12-05 14:01:42.665877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.147 [2024-12-05 14:01:42.665890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.147 [2024-12-05 14:01:42.665905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.407 [2024-12-05 14:01:42.678291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.407 [2024-12-05 14:01:42.678635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.407 [2024-12-05 14:01:42.678664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.407 [2024-12-05 14:01:42.678682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.407 [2024-12-05 14:01:42.678936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.407 [2024-12-05 14:01:42.679126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.407 [2024-12-05 14:01:42.679145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.407 [2024-12-05 14:01:42.679158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.407 [2024-12-05 14:01:42.679170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.407 [2024-12-05 14:01:42.691679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.407 [2024-12-05 14:01:42.692060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.407 [2024-12-05 14:01:42.692089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.407 [2024-12-05 14:01:42.692105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.407 [2024-12-05 14:01:42.692341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.407 [2024-12-05 14:01:42.692606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.407 [2024-12-05 14:01:42.692638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.407 [2024-12-05 14:01:42.692654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.407 [2024-12-05 14:01:42.692669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.407 [2024-12-05 14:01:42.704993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.407 [2024-12-05 14:01:42.705288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.407 [2024-12-05 14:01:42.705331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.407 [2024-12-05 14:01:42.705348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.407 [2024-12-05 14:01:42.705609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.407 [2024-12-05 14:01:42.705853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.408 [2024-12-05 14:01:42.705873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.408 [2024-12-05 14:01:42.705886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.408 [2024-12-05 14:01:42.705898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.408 [2024-12-05 14:01:42.718253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.408 [2024-12-05 14:01:42.718606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.408 [2024-12-05 14:01:42.718634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.408 [2024-12-05 14:01:42.718651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.408 [2024-12-05 14:01:42.718891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.408 [2024-12-05 14:01:42.719095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.408 [2024-12-05 14:01:42.719114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.408 [2024-12-05 14:01:42.719126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.408 [2024-12-05 14:01:42.719139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.408 [2024-12-05 14:01:42.731506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.408 [2024-12-05 14:01:42.731912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.408 [2024-12-05 14:01:42.731966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.408 [2024-12-05 14:01:42.731992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.408 [2024-12-05 14:01:42.732288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.408 [2024-12-05 14:01:42.732565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.408 [2024-12-05 14:01:42.732594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.408 [2024-12-05 14:01:42.732616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.408 [2024-12-05 14:01:42.732637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.408 [2024-12-05 14:01:42.745902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.408 [2024-12-05 14:01:42.746253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.408 [2024-12-05 14:01:42.746284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.408 [2024-12-05 14:01:42.746302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.408 [2024-12-05 14:01:42.746570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.408 [2024-12-05 14:01:42.746800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.408 [2024-12-05 14:01:42.746824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.408 [2024-12-05 14:01:42.746837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.408 [2024-12-05 14:01:42.746850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.408 [2024-12-05 14:01:42.759084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.408 [2024-12-05 14:01:42.759432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.408 [2024-12-05 14:01:42.759462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.408 [2024-12-05 14:01:42.759479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.408 [2024-12-05 14:01:42.759715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.408 [2024-12-05 14:01:42.759919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.408 [2024-12-05 14:01:42.759939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.408 [2024-12-05 14:01:42.759952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.408 [2024-12-05 14:01:42.759963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.408 [2024-12-05 14:01:42.772119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.408 [2024-12-05 14:01:42.772436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.408 [2024-12-05 14:01:42.772464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.408 [2024-12-05 14:01:42.772495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.408 [2024-12-05 14:01:42.772721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.408 [2024-12-05 14:01:42.772944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.408 [2024-12-05 14:01:42.772963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.408 [2024-12-05 14:01:42.772976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.408 [2024-12-05 14:01:42.772988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.408 [2024-12-05 14:01:42.785169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.408 [2024-12-05 14:01:42.785515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.408 [2024-12-05 14:01:42.785544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.408 [2024-12-05 14:01:42.785561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.408 [2024-12-05 14:01:42.785797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.408 [2024-12-05 14:01:42.786002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.408 [2024-12-05 14:01:42.786021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.408 [2024-12-05 14:01:42.786034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.408 [2024-12-05 14:01:42.786051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.408 [2024-12-05 14:01:42.798309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.408 [2024-12-05 14:01:42.798688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.408 [2024-12-05 14:01:42.798717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.408 [2024-12-05 14:01:42.798733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.408 [2024-12-05 14:01:42.798968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.409 [2024-12-05 14:01:42.799172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.409 [2024-12-05 14:01:42.799192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.409 [2024-12-05 14:01:42.799204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.409 [2024-12-05 14:01:42.799217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.409 [2024-12-05 14:01:42.811459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.409 [2024-12-05 14:01:42.811771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.409 [2024-12-05 14:01:42.811799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.409 [2024-12-05 14:01:42.811816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.409 [2024-12-05 14:01:42.812033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.409 [2024-12-05 14:01:42.812238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.409 [2024-12-05 14:01:42.812257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.409 [2024-12-05 14:01:42.812269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.409 [2024-12-05 14:01:42.812281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.409 [2024-12-05 14:01:42.824477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.409 [2024-12-05 14:01:42.824815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.409 [2024-12-05 14:01:42.824842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.409 [2024-12-05 14:01:42.824857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.409 [2024-12-05 14:01:42.825068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.409 [2024-12-05 14:01:42.825273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.409 [2024-12-05 14:01:42.825292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.409 [2024-12-05 14:01:42.825304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.409 [2024-12-05 14:01:42.825316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.409 [2024-12-05 14:01:42.837536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.409 [2024-12-05 14:01:42.837946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.409 [2024-12-05 14:01:42.837973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.409 [2024-12-05 14:01:42.837989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.409 [2024-12-05 14:01:42.838220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.409 [2024-12-05 14:01:42.838450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.409 [2024-12-05 14:01:42.838485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.409 [2024-12-05 14:01:42.838499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.409 [2024-12-05 14:01:42.838513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.409 [2024-12-05 14:01:42.850586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.409 [2024-12-05 14:01:42.850992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.409 [2024-12-05 14:01:42.851020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.409 [2024-12-05 14:01:42.851037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.409 [2024-12-05 14:01:42.851275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.409 [2024-12-05 14:01:42.851523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.409 [2024-12-05 14:01:42.851544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.409 [2024-12-05 14:01:42.851557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.409 [2024-12-05 14:01:42.851570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.409 [2024-12-05 14:01:42.863711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.409 [2024-12-05 14:01:42.864051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.409 [2024-12-05 14:01:42.864078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.409 [2024-12-05 14:01:42.864095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.409 [2024-12-05 14:01:42.864332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.409 [2024-12-05 14:01:42.864564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.409 [2024-12-05 14:01:42.864584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.409 [2024-12-05 14:01:42.864597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.409 [2024-12-05 14:01:42.864609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.409 [2024-12-05 14:01:42.876788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.409 [2024-12-05 14:01:42.877099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.409 [2024-12-05 14:01:42.877126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.409 [2024-12-05 14:01:42.877143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.409 [2024-12-05 14:01:42.877367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.409 [2024-12-05 14:01:42.877603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.409 [2024-12-05 14:01:42.877624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.409 [2024-12-05 14:01:42.877638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.409 [2024-12-05 14:01:42.877651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.409 [2024-12-05 14:01:42.889822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.409 [2024-12-05 14:01:42.890224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.409 [2024-12-05 14:01:42.890252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.409 [2024-12-05 14:01:42.890268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.409 [2024-12-05 14:01:42.890516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.409 [2024-12-05 14:01:42.890732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.409 [2024-12-05 14:01:42.890752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.409 [2024-12-05 14:01:42.890766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.409 [2024-12-05 14:01:42.890792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.410 [2024-12-05 14:01:42.903008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.410 [2024-12-05 14:01:42.903364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.410 [2024-12-05 14:01:42.903392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.410 [2024-12-05 14:01:42.903408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.410 [2024-12-05 14:01:42.903675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.410 [2024-12-05 14:01:42.903881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.410 [2024-12-05 14:01:42.903900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.410 [2024-12-05 14:01:42.903913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.410 [2024-12-05 14:01:42.903926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.410 [2024-12-05 14:01:42.916012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.410 [2024-12-05 14:01:42.916321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.410 [2024-12-05 14:01:42.916349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.410 [2024-12-05 14:01:42.916365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.410 [2024-12-05 14:01:42.916647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.410 [2024-12-05 14:01:42.916875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.410 [2024-12-05 14:01:42.916900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.410 [2024-12-05 14:01:42.916913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.410 [2024-12-05 14:01:42.916925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.410 [2024-12-05 14:01:42.929308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.410 [2024-12-05 14:01:42.929695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.410 [2024-12-05 14:01:42.929740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.410 [2024-12-05 14:01:42.929757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.410 [2024-12-05 14:01:42.930001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.410 [2024-12-05 14:01:42.930230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.410 [2024-12-05 14:01:42.930250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.410 [2024-12-05 14:01:42.930263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.410 [2024-12-05 14:01:42.930275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.670 [2024-12-05 14:01:42.942568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.670 [2024-12-05 14:01:42.942928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.670 [2024-12-05 14:01:42.942956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.670 [2024-12-05 14:01:42.942973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.670 [2024-12-05 14:01:42.943210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.670 [2024-12-05 14:01:42.943440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.670 [2024-12-05 14:01:42.943475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.670 [2024-12-05 14:01:42.943489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.670 [2024-12-05 14:01:42.943502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.670 [2024-12-05 14:01:42.955630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:42.955972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:42.956000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:42.956016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:42.956252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:42.956498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:42.956519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:42.956533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:42.956550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:42.968732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:42.969072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:42.969100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:42.969116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:42.969353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:42.969588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:42.969609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:42.969622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:42.969636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:42.981863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:42.982315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:42.982354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:42.982381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:42.982687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:42.982973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:42.983002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:42.983023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:42.983044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:42.995890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:42.996323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:42.996377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:42.996395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:42.996695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:42.996919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:42.996938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:42.996951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:42.996963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:43.009143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:43.009566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:43.009601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:43.009620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:43.009862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:43.010066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:43.010086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:43.010098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:43.010111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:43.022316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:43.022717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:43.022747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:43.022764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:43.022995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:43.023200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:43.023218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:43.023232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:43.023245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:43.035469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:43.035778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:43.035806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:43.035823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:43.036041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:43.036246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:43.036265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:43.036278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:43.036290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:43.048447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:43.048794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:43.048822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:43.048838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:43.049083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:43.049273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:43.049292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:43.049305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:43.049318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:43.061561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:43.061905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:43.061933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:43.061949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:43.062180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:43.062384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:43.062404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.671 [2024-12-05 14:01:43.062426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.671 [2024-12-05 14:01:43.062457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.671 [2024-12-05 14:01:43.074632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.671 [2024-12-05 14:01:43.075035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.671 [2024-12-05 14:01:43.075063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.671 [2024-12-05 14:01:43.075080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.671 [2024-12-05 14:01:43.075317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.671 [2024-12-05 14:01:43.075549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.671 [2024-12-05 14:01:43.075570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.672 [2024-12-05 14:01:43.075583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.672 [2024-12-05 14:01:43.075596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.672 [2024-12-05 14:01:43.087732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.672 [2024-12-05 14:01:43.088074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-12-05 14:01:43.088103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.672 [2024-12-05 14:01:43.088119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.672 [2024-12-05 14:01:43.088356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.672 [2024-12-05 14:01:43.088593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.672 [2024-12-05 14:01:43.088618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.672 [2024-12-05 14:01:43.088633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.672 [2024-12-05 14:01:43.088646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.672 [2024-12-05 14:01:43.100795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.672 [2024-12-05 14:01:43.101199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-12-05 14:01:43.101228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.672 [2024-12-05 14:01:43.101244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.672 [2024-12-05 14:01:43.101493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.672 [2024-12-05 14:01:43.101704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.672 [2024-12-05 14:01:43.101723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.672 [2024-12-05 14:01:43.101752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.672 [2024-12-05 14:01:43.101765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.672 [2024-12-05 14:01:43.113908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.672 [2024-12-05 14:01:43.114322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-12-05 14:01:43.114351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.672 [2024-12-05 14:01:43.114367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.672 [2024-12-05 14:01:43.114648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.672 [2024-12-05 14:01:43.114850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.672 [2024-12-05 14:01:43.114884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.672 [2024-12-05 14:01:43.114897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.672 [2024-12-05 14:01:43.114909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.672 [2024-12-05 14:01:43.127227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.672 [2024-12-05 14:01:43.127641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-12-05 14:01:43.127670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.672 [2024-12-05 14:01:43.127687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.672 [2024-12-05 14:01:43.127954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.672 [2024-12-05 14:01:43.128199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.672 [2024-12-05 14:01:43.128219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.672 [2024-12-05 14:01:43.128231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.672 [2024-12-05 14:01:43.128248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.672 [2024-12-05 14:01:43.140310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.672 [2024-12-05 14:01:43.140646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.672 [2024-12-05 14:01:43.140674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:11.672 [2024-12-05 14:01:43.140691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:11.672 [2024-12-05 14:01:43.140928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:11.672 [2024-12-05 14:01:43.141134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.672 [2024-12-05 14:01:43.141153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.672 [2024-12-05 14:01:43.141166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.672 [2024-12-05 14:01:43.141178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.672 [2024-12-05 14:01:43.153384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.672 [2024-12-05 14:01:43.153703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.672 [2024-12-05 14:01:43.153731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.672 [2024-12-05 14:01:43.153748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.672 [2024-12-05 14:01:43.153966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.672 [2024-12-05 14:01:43.154171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.672 [2024-12-05 14:01:43.154190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.672 [2024-12-05 14:01:43.154203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.672 [2024-12-05 14:01:43.154216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.672 [2024-12-05 14:01:43.166412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.672 [2024-12-05 14:01:43.166873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.672 [2024-12-05 14:01:43.166928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.672 [2024-12-05 14:01:43.166944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.672 [2024-12-05 14:01:43.167185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.672 [2024-12-05 14:01:43.167374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.672 [2024-12-05 14:01:43.167393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.672 [2024-12-05 14:01:43.167406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.672 [2024-12-05 14:01:43.167424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.672 [2024-12-05 14:01:43.179444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.672 [2024-12-05 14:01:43.179849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.672 [2024-12-05 14:01:43.179882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.672 [2024-12-05 14:01:43.179899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.672 [2024-12-05 14:01:43.180135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.672 [2024-12-05 14:01:43.180340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.672 [2024-12-05 14:01:43.180359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.672 [2024-12-05 14:01:43.180372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.672 [2024-12-05 14:01:43.180384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.672 4298.00 IOPS, 16.79 MiB/s [2024-12-05T13:01:43.198Z] [2024-12-05 14:01:43.194451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.932 [2024-12-05 14:01:43.194886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.194915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.194933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.195179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.195405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.195436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.195465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.195480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.207559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.207951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.207979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.207996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.208233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.208463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.208483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.208497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.208511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.220788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.221202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.221230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.221247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.221501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.221716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.221736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.221749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.221776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.233877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.234277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.234313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.234339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.234655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.234930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.234959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.234980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.234999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.248175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.248540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.248604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.248623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.248876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.249067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.249086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.249099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.249111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.261379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.261772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.261801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.261818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.262055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.262245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.262269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.262282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.262295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.274551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.274864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.274891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.274908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.275128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.275334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.275353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.275366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.275378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.287612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.288018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.288046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.288063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.288300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.288531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.288551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.288564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.288577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.300707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.301048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.301076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.301093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.301329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.301580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.301601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.301615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.933 [2024-12-05 14:01:43.301632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.933 [2024-12-05 14:01:43.313806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.933 [2024-12-05 14:01:43.314100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.933 [2024-12-05 14:01:43.314143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.933 [2024-12-05 14:01:43.314159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.933 [2024-12-05 14:01:43.314376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.933 [2024-12-05 14:01:43.314612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.933 [2024-12-05 14:01:43.314633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.933 [2024-12-05 14:01:43.314647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.314659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.326805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.327171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.327212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.327228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.327457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.327669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.327689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.327702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.327715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.339834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.340174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.340202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.340219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.340467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.340663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.340682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.340695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.340708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.352877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.353285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.353314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.353330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.353596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.353809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.353828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.353840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.353853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.365976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.366438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.366489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.366506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.366745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.366934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.366953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.366965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.366978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.379278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.379635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.379665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.379682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.379949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.380139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.380158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.380171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.380183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.392685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.393065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.393093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.393110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.393346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.393585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.393607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.393622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.393635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.405883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.406289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.406317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.406334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.406599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.406808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.406827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.406840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.406853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.419557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.420022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.420051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.420068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.420313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.420567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.420590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.420604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.420619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.433194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.433532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.433561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.433578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.433809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.434027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.434067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.434082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.934 [2024-12-05 14:01:43.434096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:11.934 [2024-12-05 14:01:43.446766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:11.934 [2024-12-05 14:01:43.447243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.934 [2024-12-05 14:01:43.447303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:11.934 [2024-12-05 14:01:43.447340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:11.934 [2024-12-05 14:01:43.447566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:11.934 [2024-12-05 14:01:43.447815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:11.934 [2024-12-05 14:01:43.447836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:11.934 [2024-12-05 14:01:43.447849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:11.935 [2024-12-05 14:01:43.447862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.194 [2024-12-05 14:01:43.460571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.194 [2024-12-05 14:01:43.460909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.194 [2024-12-05 14:01:43.460938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420
00:30:12.194 [2024-12-05 14:01:43.460955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set
00:30:12.194 [2024-12-05 14:01:43.461187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor
00:30:12.194 [2024-12-05 14:01:43.461402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.194 [2024-12-05 14:01:43.461451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.194 [2024-12-05 14:01:43.461467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.194 [2024-12-05 14:01:43.461481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.194 [2024-12-05 14:01:43.474289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.194 [2024-12-05 14:01:43.474626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.194 [2024-12-05 14:01:43.474656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.194 [2024-12-05 14:01:43.474673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.194 [2024-12-05 14:01:43.474904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.194 [2024-12-05 14:01:43.475137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.194 [2024-12-05 14:01:43.475157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.194 [2024-12-05 14:01:43.475171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.194 [2024-12-05 14:01:43.475204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.194 [2024-12-05 14:01:43.487733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.194 [2024-12-05 14:01:43.488145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.194 [2024-12-05 14:01:43.488183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.194 [2024-12-05 14:01:43.488224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.194 [2024-12-05 14:01:43.488527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.194 [2024-12-05 14:01:43.488827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.194 [2024-12-05 14:01:43.488853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.194 [2024-12-05 14:01:43.488867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.194 [2024-12-05 14:01:43.488894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.194 [2024-12-05 14:01:43.501788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.194 [2024-12-05 14:01:43.502151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.194 [2024-12-05 14:01:43.502201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.194 [2024-12-05 14:01:43.502218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.194 [2024-12-05 14:01:43.502459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.194 [2024-12-05 14:01:43.502694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.194 [2024-12-05 14:01:43.502716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.194 [2024-12-05 14:01:43.502731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.194 [2024-12-05 14:01:43.502745] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.194 [2024-12-05 14:01:43.515079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.194 [2024-12-05 14:01:43.515429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.194 [2024-12-05 14:01:43.515474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.194 [2024-12-05 14:01:43.515492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.194 [2024-12-05 14:01:43.515723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.515931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.515951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.515963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.515975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.528262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.528794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.528823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.528839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.529086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.529275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.529294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.529307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.529320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.541492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.541845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.541894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.541910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.542146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.542350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.542369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.542382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.542394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.554648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.555063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.555111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.555128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.555365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.555600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.555622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.555635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.555648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.567657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.568009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.568036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.568052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.568275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.568524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.568545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.568559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.568571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.580744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.581150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.581178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.581195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.581442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.581637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.581657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.581670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.581683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.593893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.594248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.594297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.594314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.594561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.594789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.594808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.594821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.594833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.607139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.607487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.607517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.607534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.607779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.607969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.607992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.608006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.608019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.620285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.620655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.620683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.620707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.620942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.621147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.621166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.621179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.621191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.633898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.634321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.634375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.634401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.195 [2024-12-05 14:01:43.634671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.195 [2024-12-05 14:01:43.634934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.195 [2024-12-05 14:01:43.634954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.195 [2024-12-05 14:01:43.634967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.195 [2024-12-05 14:01:43.634979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.195 [2024-12-05 14:01:43.647285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.195 [2024-12-05 14:01:43.647622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.195 [2024-12-05 14:01:43.647652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.195 [2024-12-05 14:01:43.647669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.196 [2024-12-05 14:01:43.647932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.196 [2024-12-05 14:01:43.648121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.196 [2024-12-05 14:01:43.648140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.196 [2024-12-05 14:01:43.648153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.196 [2024-12-05 14:01:43.648170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.196 [2024-12-05 14:01:43.660525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.196 [2024-12-05 14:01:43.660955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.196 [2024-12-05 14:01:43.661004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.196 [2024-12-05 14:01:43.661021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.196 [2024-12-05 14:01:43.661270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.196 [2024-12-05 14:01:43.661505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.196 [2024-12-05 14:01:43.661526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.196 [2024-12-05 14:01:43.661540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.196 [2024-12-05 14:01:43.661553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.196 [2024-12-05 14:01:43.673836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.196 [2024-12-05 14:01:43.674179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.196 [2024-12-05 14:01:43.674207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.196 [2024-12-05 14:01:43.674224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.196 [2024-12-05 14:01:43.674471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.196 [2024-12-05 14:01:43.674672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.196 [2024-12-05 14:01:43.674691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.196 [2024-12-05 14:01:43.674714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.196 [2024-12-05 14:01:43.674742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.196 [2024-12-05 14:01:43.687008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.196 [2024-12-05 14:01:43.687321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.196 [2024-12-05 14:01:43.687349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.196 [2024-12-05 14:01:43.687366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.196 [2024-12-05 14:01:43.687614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.196 [2024-12-05 14:01:43.687844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.196 [2024-12-05 14:01:43.687863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.196 [2024-12-05 14:01:43.687876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.196 [2024-12-05 14:01:43.687889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2352519 Killed "${NVMF_APP[@]}" "$@" 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2353473 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2353473 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2353473 ']' 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.196 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.196 [2024-12-05 14:01:43.700431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.196 [2024-12-05 14:01:43.700855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.196 [2024-12-05 14:01:43.700903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.196 [2024-12-05 14:01:43.700921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.196 [2024-12-05 14:01:43.701172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.196 [2024-12-05 14:01:43.701366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.196 [2024-12-05 14:01:43.701385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.196 [2024-12-05 14:01:43.701399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.196 [2024-12-05 14:01:43.701439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.196 [2024-12-05 14:01:43.713917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.196 [2024-12-05 14:01:43.714271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.196 [2024-12-05 14:01:43.714300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.196 [2024-12-05 14:01:43.714317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.196 [2024-12-05 14:01:43.714561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.196 [2024-12-05 14:01:43.714785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.196 [2024-12-05 14:01:43.714804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.196 [2024-12-05 14:01:43.714817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.196 [2024-12-05 14:01:43.714830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.727424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.727857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.727883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.727899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.728149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.728344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.728363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.728377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.728390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.740769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.741196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.741232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.741249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.741474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.741718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.741748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.741773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.741795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.458 [2024-12-05 14:01:43.745986] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:30:12.458 [2024-12-05 14:01:43.746044] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.458 [2024-12-05 14:01:43.754886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.755254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.755295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.755312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.755582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.755818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.755838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.755852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.755864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.768300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.768707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.768752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.768769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.769021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.769217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.769236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.769249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.769262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.781693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.782070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.782099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.782117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.782361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.782616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.782638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.782652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.782667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.795063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.795356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.795398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.795415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.795670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.795901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.795920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.795933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.795946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.808314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.808738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.808786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.808810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.809047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.809258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.809278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.809291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.809303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.820140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:12.458 [2024-12-05 14:01:43.821754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.822147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.458 [2024-12-05 14:01:43.822175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.458 [2024-12-05 14:01:43.822192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.458 [2024-12-05 14:01:43.822449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.458 [2024-12-05 14:01:43.822665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.458 [2024-12-05 14:01:43.822685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.458 [2024-12-05 14:01:43.822699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.458 [2024-12-05 14:01:43.822712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.458 [2024-12-05 14:01:43.835068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.458 [2024-12-05 14:01:43.835617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.835664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.835685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.835950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.836148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.836168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.836184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.836199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.848388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.848816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.848845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.848862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.849130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.849325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.849345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.849358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.849371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.861714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.862085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.862115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.862133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.862374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.862615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.862636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.862651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.862664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.459 [2024-12-05 14:01:43.874891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.459 [2024-12-05 14:01:43.874922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.459 [2024-12-05 14:01:43.874949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.459 [2024-12-05 14:01:43.874961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:12.459 [2024-12-05 14:01:43.874971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.459 [2024-12-05 14:01:43.875084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.875409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.875468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.875486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.875734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.875944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.875964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.875977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.875989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.876496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.459 [2024-12-05 14:01:43.876525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.459 [2024-12-05 14:01:43.876528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.459 [2024-12-05 14:01:43.888625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.889125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.889163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.889192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.889462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.889704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.889726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.889743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.889759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.902196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.902716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.902766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.902786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.903043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.903255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.903275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.903292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.903308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.915781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.916262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.916316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.916337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.916611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.916841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.916862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.916878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.916895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.929465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.929964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.930018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.930039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.930291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.930533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.930556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.930573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.930589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.943006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.943584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.943626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.459 [2024-12-05 14:01:43.943647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.459 [2024-12-05 14:01:43.943902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.459 [2024-12-05 14:01:43.944113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.459 [2024-12-05 14:01:43.944135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.459 [2024-12-05 14:01:43.944151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.459 [2024-12-05 14:01:43.944168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.459 [2024-12-05 14:01:43.956608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.459 [2024-12-05 14:01:43.957199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.459 [2024-12-05 14:01:43.957237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.460 [2024-12-05 14:01:43.957257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.460 [2024-12-05 14:01:43.957492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.460 [2024-12-05 14:01:43.957730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.460 [2024-12-05 14:01:43.957753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.460 [2024-12-05 14:01:43.957771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.460 [2024-12-05 14:01:43.957803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.460 [2024-12-05 14:01:43.970168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.460 [2024-12-05 14:01:43.970560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.460 [2024-12-05 14:01:43.970591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.460 [2024-12-05 14:01:43.970608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.460 [2024-12-05 14:01:43.970863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.460 [2024-12-05 14:01:43.971072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.460 [2024-12-05 14:01:43.971093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.460 [2024-12-05 14:01:43.971107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.460 [2024-12-05 14:01:43.971121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 [2024-12-05 14:01:43.983858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:43.984208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.722 [2024-12-05 14:01:43.984238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.722 [2024-12-05 14:01:43.984256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.722 [2024-12-05 14:01:43.984484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.722 [2024-12-05 14:01:43.984705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.722 [2024-12-05 14:01:43.984743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.722 [2024-12-05 14:01:43.984757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.722 [2024-12-05 14:01:43.984772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.722 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:12.722 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.722 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.722 14:01:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.722 [2024-12-05 14:01:43.997535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:43.997900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.722 [2024-12-05 14:01:43.997938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.722 [2024-12-05 14:01:43.997965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.722 [2024-12-05 14:01:43.998246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.722 [2024-12-05 14:01:43.998550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.722 [2024-12-05 14:01:43.998583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.722 [2024-12-05 14:01:43.998607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.722 [2024-12-05 14:01:43.998632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 [2024-12-05 14:01:44.011935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:44.012334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.722 [2024-12-05 14:01:44.012366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.722 [2024-12-05 14:01:44.012390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.722 [2024-12-05 14:01:44.012619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.722 [2024-12-05 14:01:44.012865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.722 [2024-12-05 14:01:44.012887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.722 [2024-12-05 14:01:44.012901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.722 [2024-12-05 14:01:44.012914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.722 [2024-12-05 14:01:44.025606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:44.025651] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.722 [2024-12-05 14:01:44.025953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.722 [2024-12-05 14:01:44.025984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.722 [2024-12-05 14:01:44.026002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.722 [2024-12-05 14:01:44.026235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.722 [2024-12-05 14:01:44.026477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.722 [2024-12-05 14:01:44.026500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.722 [2024-12-05 14:01:44.026515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.722 [2024-12-05 14:01:44.026529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.722 [2024-12-05 14:01:44.039260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:44.039635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.722 [2024-12-05 14:01:44.039665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.722 [2024-12-05 14:01:44.039684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.722 [2024-12-05 14:01:44.039947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.722 [2024-12-05 14:01:44.040158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.722 [2024-12-05 14:01:44.040180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.722 [2024-12-05 14:01:44.040195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.722 [2024-12-05 14:01:44.040217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 [2024-12-05 14:01:44.052899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:44.053244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.722 [2024-12-05 14:01:44.053274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.722 [2024-12-05 14:01:44.053291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.722 [2024-12-05 14:01:44.053531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.722 [2024-12-05 14:01:44.053781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.722 [2024-12-05 14:01:44.053802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.722 [2024-12-05 14:01:44.053816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.722 [2024-12-05 14:01:44.053831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 [2024-12-05 14:01:44.066385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:44.066788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.722 [2024-12-05 14:01:44.066820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.722 [2024-12-05 14:01:44.066837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.722 [2024-12-05 14:01:44.067084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.722 [2024-12-05 14:01:44.067292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.722 [2024-12-05 14:01:44.067314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.722 [2024-12-05 14:01:44.067328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.722 [2024-12-05 14:01:44.067343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.722 Malloc0 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.722 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.722 [2024-12-05 14:01:44.079963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.722 [2024-12-05 14:01:44.080348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.723 [2024-12-05 14:01:44.080379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.723 [2024-12-05 14:01:44.080397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.723 [2024-12-05 14:01:44.080623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.723 [2024-12-05 14:01:44.080870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.723 [2024-12-05 14:01:44.080892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.723 [2024-12-05 14:01:44.080915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.723 [2024-12-05 14:01:44.080931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.723 [2024-12-05 14:01:44.093678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.723 [2024-12-05 14:01:44.094115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.723 [2024-12-05 14:01:44.094146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139fa50 with addr=10.0.0.2, port=4420 00:30:12.723 [2024-12-05 14:01:44.094163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fa50 is same with the state(6) to be set 00:30:12.723 [2024-12-05 14:01:44.094397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa50 (9): Bad file descriptor 00:30:12.723 [2024-12-05 14:01:44.094640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.723 [2024-12-05 14:01:44.094664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:30:12.723 [2024-12-05 14:01:44.094680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.723 [2024-12-05 14:01:44.094709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.723 [2024-12-05 14:01:44.095085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.723 14:01:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2352702 00:30:12.723 [2024-12-05 14:01:44.107230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.723 [2024-12-05 14:01:44.136189] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:30:14.098 3654.50 IOPS, 14.28 MiB/s [2024-12-05T13:01:46.565Z] 4339.43 IOPS, 16.95 MiB/s [2024-12-05T13:01:47.497Z] 4859.38 IOPS, 18.98 MiB/s [2024-12-05T13:01:48.433Z] 5264.56 IOPS, 20.56 MiB/s [2024-12-05T13:01:49.366Z] 5596.50 IOPS, 21.86 MiB/s [2024-12-05T13:01:50.308Z] 5860.09 IOPS, 22.89 MiB/s [2024-12-05T13:01:51.276Z] 6080.00 IOPS, 23.75 MiB/s [2024-12-05T13:01:52.216Z] 6272.23 IOPS, 24.50 MiB/s [2024-12-05T13:01:53.595Z] 6429.86 IOPS, 25.12 MiB/s [2024-12-05T13:01:53.595Z] 6567.27 IOPS, 25.65 MiB/s 00:30:22.069 Latency(us) 00:30:22.069 [2024-12-05T13:01:53.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.069 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:22.069 Verification LBA range: start 0x0 length 0x4000 00:30:22.069 Nvme1n1 : 15.01 6570.96 25.67 9939.14 0.00 7729.26 585.58 21845.33 00:30:22.069 [2024-12-05T13:01:53.595Z] 
=================================================================================================================== 00:30:22.069 [2024-12-05T13:01:53.595Z] Total : 6570.96 25.67 9939.14 0.00 7729.26 585.58 21845.33 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:22.069 rmmod nvme_tcp 00:30:22.069 rmmod nvme_fabrics 00:30:22.069 rmmod nvme_keyring 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- 
# '[' -n 2353473 ']' 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2353473 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2353473 ']' 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2353473 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353473 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353473' 00:30:22.069 killing process with pid 2353473 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2353473 00:30:22.069 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2353473 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 
00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.328 14:01:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:24.872 00:30:24.872 real 0m22.768s 00:30:24.872 user 1m0.706s 00:30:24.872 sys 0m4.287s 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.872 ************************************ 00:30:24.872 END TEST nvmf_bdevperf 00:30:24.872 ************************************ 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.872 ************************************ 00:30:24.872 START TEST nvmf_target_disconnect 00:30:24.872 ************************************ 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:24.872 * Looking for 
test storage... 00:30:24.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:30:24.872 14:01:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:24.872 
14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:24.872 --rc genhtml_branch_coverage=1 00:30:24.872 --rc genhtml_function_coverage=1 00:30:24.872 --rc genhtml_legend=1 00:30:24.872 --rc geninfo_all_blocks=1 00:30:24.872 --rc geninfo_unexecuted_blocks=1 00:30:24.872 00:30:24.872 ' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.872 --rc genhtml_branch_coverage=1 00:30:24.872 --rc genhtml_function_coverage=1 00:30:24.872 --rc genhtml_legend=1 00:30:24.872 --rc geninfo_all_blocks=1 00:30:24.872 --rc geninfo_unexecuted_blocks=1 00:30:24.872 00:30:24.872 ' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.872 --rc genhtml_branch_coverage=1 00:30:24.872 --rc genhtml_function_coverage=1 00:30:24.872 --rc genhtml_legend=1 00:30:24.872 --rc geninfo_all_blocks=1 00:30:24.872 --rc geninfo_unexecuted_blocks=1 00:30:24.872 00:30:24.872 ' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.872 --rc genhtml_branch_coverage=1 00:30:24.872 --rc genhtml_function_coverage=1 00:30:24.872 --rc genhtml_legend=1 00:30:24.872 --rc geninfo_all_blocks=1 00:30:24.872 --rc geninfo_unexecuted_blocks=1 00:30:24.872 00:30:24.872 ' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.872 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # 
shopt -s extglob 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.873 14:01:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:24.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:24.873 14:01:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.779 
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.779 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:26.780 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:26.780 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:26.780 Found net devices under 0000:09:00.0: cvl_0_0 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:26.780 Found net devices under 0000:09:00.1: cvl_0_1 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.780 14:01:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.780
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.780
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:30:26.780
00:30:26.780
--- 10.0.0.2 ping statistics --- 00:30:26.780
1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.780
rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.780
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.780
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:26.780
00:30:26.780
--- 10.0.0.1 ping statistics --- 00:30:26.780
1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.780
rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.780
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.780
14:01:58
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.780 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:27.039 ************************************ 00:30:27.040 START TEST nvmf_target_disconnect_tc1 00:30:27.040 ************************************ 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.040
[2024-12-05 14:01:58.404225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.040
[2024-12-05 14:01:58.404301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb83f40 with addr=10.0.0.2, port=4420 00:30:27.040
[2024-12-05 14:01:58.404339] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:27.040
[2024-12-05 14:01:58.404358] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:27.040
[2024-12-05 14:01:58.404372] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:27.040
spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:27.040
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:27.040
Initializing NVMe Controllers 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:27.040
00:30:27.040
real 0m0.096s 00:30:27.040
user 0m0.046s 00:30:27.040
sys 0m0.050s 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:27.040
************************************ 00:30:27.040
END TEST nvmf_target_disconnect_tc1 00:30:27.040
************************************ 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:27.040
14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:27.040
14:01:58
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:27.040 ************************************ 00:30:27.040 START TEST nvmf_target_disconnect_tc2 00:30:27.040 ************************************ 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2356635 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2356635 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2356635 ']' 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.040 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.040 [2024-12-05 14:01:58.517997] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:30:27.040 [2024-12-05 14:01:58.518085] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.299 [2024-12-05 14:01:58.608041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.299 [2024-12-05 14:01:58.682650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.299 [2024-12-05 14:01:58.682723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.299 [2024-12-05 14:01:58.682750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.299 [2024-12-05 14:01:58.682786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.299 [2024-12-05 14:01:58.682805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:27.299 [2024-12-05 14:01:58.684832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:27.299 [2024-12-05 14:01:58.684906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:27.299 [2024-12-05 14:01:58.684967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:27.299 [2024-12-05 14:01:58.684977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.559 Malloc0 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.559 14:01:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.559 [2024-12-05 14:01:58.946754] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.559 14:01:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.559 [2024-12-05 14:01:58.975005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2356670 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.559 14:01:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:30.130 14:02:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2356635 00:30:30.130 14:02:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 
Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Write completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 [2024-12-05 14:02:00.998819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.130 starting I/O failed 00:30:30.130 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O 
failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O 
failed 00:30:30.131 [2024-12-05 14:02:00.999133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 
00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 [2024-12-05 14:02:00.999465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with 
error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Read completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 Write completed with error (sct=0, sc=8) 00:30:30.131 starting I/O failed 00:30:30.131 [2024-12-05 14:02:00.999805] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.131 [2024-12-05 14:02:00.999974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.131 [2024-12-05 14:02:01.000023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.131 qpair failed and we were unable to recover it. 00:30:30.131 [2024-12-05 14:02:01.000122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.131 [2024-12-05 14:02:01.000149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.131 qpair failed and we were unable to recover it. 00:30:30.131 [2024-12-05 14:02:01.000243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.131 [2024-12-05 14:02:01.000271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.131 qpair failed and we were unable to recover it. 00:30:30.131 [2024-12-05 14:02:01.000397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.131 [2024-12-05 14:02:01.000433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.131 qpair failed and we were unable to recover it. 00:30:30.131 [2024-12-05 14:02:01.000536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.131 [2024-12-05 14:02:01.000562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.131 qpair failed and we were unable to recover it. 
00:30:30.131 [2024-12-05 14:02:01.000652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.131 [2024-12-05 14:02:01.000678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.000796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.000822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.000958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.000998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.001106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.001134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.001232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.001258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-12-05 14:02:01.001351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.001377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.001485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.001512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.001630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.001657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.001789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.001815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.001910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.001936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-12-05 14:02:01.002058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.002168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.002299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.002436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.002549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-12-05 14:02:01.002660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.002774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.002908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.002933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.003012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.003121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-12-05 14:02:01.003230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.003364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.003483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.003601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.003751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-12-05 14:02:01.003868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.003893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.004009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.004035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.004147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.004177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.004269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.004295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.004379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.004405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-12-05 14:02:01.004497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.004523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.004663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.004720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.004852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.004882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.004974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.005001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.005116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.005143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-12-05 14:02:01.005226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.005252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.005369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.005398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.005522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.005549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.132 [2024-12-05 14:02:01.005640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.132 [2024-12-05 14:02:01.005667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.132 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.005748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.005774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.005885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.005911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.006039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.006237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.006345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.006477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.006587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.006696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.006842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.006956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.006985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.007102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.007131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.007263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.007290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.007409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.007443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.008562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.008590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.008678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.008705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.008787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.008813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.008903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.008930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.009008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.009034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.009164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.009203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.009307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.009346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.009486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.009524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.009611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.009639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.009775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.009801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.009938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.009964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.010090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.010118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.010217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.010246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.010330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.010356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.010457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.010484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.010578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.010604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.010687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.010713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.010865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.010918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.011048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.011099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.011259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.011297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.011391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.011433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.011536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.011562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 00:30:30.133 [2024-12-05 14:02:01.011650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.133 [2024-12-05 14:02:01.011675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-12-05 14:02:01.011789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.133 [2024-12-05 14:02:01.011815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.133 qpair failed and we were unable to recover it.
00:30:30.133 [2024-12-05 14:02:01.011928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.133 [2024-12-05 14:02:01.011992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.133 qpair failed and we were unable to recover it.
00:30:30.133 [2024-12-05 14:02:01.012075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.012102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.012241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.012267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.012392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.012441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.012544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.012574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.012669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.012696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.012813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.012839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.012950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.012977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.013079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.013118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.013257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.013296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.013437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.013465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.013578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.013604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.013725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.013751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.013840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.013866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.014955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.014980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.015118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.015144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.015275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.015315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.015428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.015456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.015563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.015601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.015690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.015717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.015830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.015861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.015979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.016101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.016268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.016384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.016538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.016646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.016762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.016927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.016954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.017095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.017121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.017204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.017230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-05 14:02:01.017309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-05 14:02:01.017336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.017440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.017479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.017575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.017603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.017756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.017784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.017940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.017994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.018147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.018173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.018285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.018310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.018395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.018427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.018521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.018548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.018639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.018666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.018767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.018793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.018878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.018904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.019960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.019985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.020071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.020097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.020174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.020200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.020310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.020336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.020457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.020496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.020617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.020646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.020768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.020796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.020884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.020909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.021029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.021055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.021142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.021169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.021262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.021293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.021437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.021463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.021558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.135 [2024-12-05 14:02:01.021597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-12-05 14:02:01.021717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.021744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.021866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.021892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.021982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.022907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.022986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.023012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.023154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.023180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.023311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.023350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.023465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.023494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.023613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.023640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.023756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.023783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.023928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.023955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.024964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.024990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.025913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.025938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.026019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.026046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.026145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.026185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.026310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.026348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.026477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.026506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.026612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.136 [2024-12-05 14:02:01.026639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.136 qpair failed and we were unable to recover it.
00:30:30.136 [2024-12-05 14:02:01.026792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.137 [2024-12-05 14:02:01.026818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.137 qpair failed and we were unable to recover it.
00:30:30.137 [2024-12-05 14:02:01.026907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.137 [2024-12-05 14:02:01.026933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.137 qpair failed and we were unable to recover it.
00:30:30.137 [2024-12-05 14:02:01.027022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.137 [2024-12-05 14:02:01.027050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.137 qpair failed and we were unable to recover it.
00:30:30.137 [2024-12-05 14:02:01.027164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.137 [2024-12-05 14:02:01.027193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.137 qpair failed and we were unable to recover it.
00:30:30.137 [2024-12-05 14:02:01.027276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.137 [2024-12-05 14:02:01.027304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.137 qpair failed and we were unable to recover it.
00:30:30.137 [2024-12-05 14:02:01.027428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.137 [2024-12-05 14:02:01.027455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.137 qpair failed and we were unable to recover it.
00:30:30.137 [2024-12-05 14:02:01.027536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.027563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.027677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.027703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.027819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.027846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.027954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.027981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.028069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.028097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-05 14:02:01.028183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.028211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.028288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.028313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.028449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.028475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.028590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.028616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.028777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.028821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-05 14:02:01.029012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.029151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.029294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.029407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.029556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-05 14:02:01.029668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.029785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.029921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.029947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.030031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.030170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-05 14:02:01.030307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.030441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.030551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.030683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.030816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-05 14:02:01.030954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.030980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.031088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.031115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.031226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.031265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.031408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.031440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.031553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.031579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-05 14:02:01.031699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.031725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.031814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.031839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-05 14:02:01.031957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-05 14:02:01.031982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.032072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.032099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.032199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.032239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.032360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.032389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.032520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.032547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.032638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.032664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.032742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.032768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.032845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.032870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.033082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.033138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.033218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.033244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.033345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.033385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.033488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.033516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.033634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.033661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.033776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.033803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.033889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.033915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.034056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.034225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.034349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.034468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.034576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.034683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.034827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.034944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.034970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.035091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.035117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.035212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.035250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.035378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.035424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.035522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.035551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.035641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.035669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.035761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.035788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.035909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.035937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.036052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.036079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.036176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.036202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.036282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.036309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.036428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.036463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.036575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.036600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.036728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.036767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.036916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.036944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-05 14:02:01.037065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.037091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-05 14:02:01.037209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-05 14:02:01.037235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.139 [2024-12-05 14:02:01.037315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.139 [2024-12-05 14:02:01.037341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.139 qpair failed and we were unable to recover it. 00:30:30.139 [2024-12-05 14:02:01.037449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.139 [2024-12-05 14:02:01.037478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.139 qpair failed and we were unable to recover it. 00:30:30.139 [2024-12-05 14:02:01.037586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.139 [2024-12-05 14:02:01.037612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.139 qpair failed and we were unable to recover it. 00:30:30.139 [2024-12-05 14:02:01.037739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.139 [2024-12-05 14:02:01.037769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.139 qpair failed and we were unable to recover it. 
00:30:30.139 [2024-12-05 14:02:01.037908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.139 [2024-12-05 14:02:01.037934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.139 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111, ECONNREFUSED) and nvme_tcp_qpair_connect_sock/qpair-recovery error repeated for tqpairs 0x1315fa0, 0x7f9d28000b90, 0x7f9d30000b90, and 0x7f9d24000b90, all with addr=10.0.0.2, port=4420, through [2024-12-05 14:02:01.053674]; duplicate entries elided ...]
00:30:30.142 [2024-12-05 14:02:01.053841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.054003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.054116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.054227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.054364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 
00:30:30.142 [2024-12-05 14:02:01.054512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.054628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.054795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.054931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.054957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.055067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.055094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 
00:30:30.142 [2024-12-05 14:02:01.055233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.055259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.055348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.055375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.055505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.055532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.055620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.055645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.055790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.055816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 
00:30:30.142 [2024-12-05 14:02:01.055924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.055950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.056045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.056084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.056178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.142 [2024-12-05 14:02:01.056207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.142 qpair failed and we were unable to recover it. 00:30:30.142 [2024-12-05 14:02:01.056302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.056329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.056424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.056451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 
00:30:30.143 [2024-12-05 14:02:01.056542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.056568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.056679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.056705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.056779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.056805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.056918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.056944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.057030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 
00:30:30.143 [2024-12-05 14:02:01.057139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.057280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.057385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.057503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.057634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 
00:30:30.143 [2024-12-05 14:02:01.057778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.057941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.057968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.058062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.058169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.058284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 
00:30:30.143 [2024-12-05 14:02:01.058470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.058634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.058750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.058862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.058970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.058997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 
00:30:30.143 [2024-12-05 14:02:01.059111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.059138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.059246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.059273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.059389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.059414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.059517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.059544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.059659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.059686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 
00:30:30.143 [2024-12-05 14:02:01.059769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.059795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.059882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.059909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.060024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.060051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.060147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.060185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.060316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.060355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 
00:30:30.143 [2024-12-05 14:02:01.060447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.060475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.060584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.143 [2024-12-05 14:02:01.060610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.143 qpair failed and we were unable to recover it. 00:30:30.143 [2024-12-05 14:02:01.060688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.060715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.060818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.060876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.060951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.060977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 
00:30:30.144 [2024-12-05 14:02:01.061101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.061210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.061342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.061483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.061606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 
00:30:30.144 [2024-12-05 14:02:01.061719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.061855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.061969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.061996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.062108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.062134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.062239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.062265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 
00:30:30.144 [2024-12-05 14:02:01.062347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.062376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.062479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.062506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.062599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.062625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.062737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.062764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.062872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.062898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 
00:30:30.144 [2024-12-05 14:02:01.063013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.063044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.063164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.063191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.063297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.063336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.063460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.063489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-05 14:02:01.063606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-05 14:02:01.063633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 
00:30:30.144 [2024-12-05 14:02:01.063741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.144 [2024-12-05 14:02:01.063767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.144 qpair failed and we were unable to recover it.
[... the same three-line record repeats continuously from 14:02:01.063852 through 14:02:01.078916 (log timestamps 00:30:30.144-00:30:30.147): posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it.", for tqpair values 0x1315fa0, 0x7f9d24000b90, 0x7f9d28000b90, and 0x7f9d30000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:30:30.147 [2024-12-05 14:02:01.079009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-05 14:02:01.079037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-05 14:02:01.079130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-05 14:02:01.079155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-05 14:02:01.079292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-05 14:02:01.079318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-05 14:02:01.079438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-05 14:02:01.079464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-05 14:02:01.079549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-05 14:02:01.079575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 
00:30:30.147 [2024-12-05 14:02:01.079712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.079737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.079852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.079878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.079958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.079984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.080100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.080210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 
00:30:30.148 [2024-12-05 14:02:01.080321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.080461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.080602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.080701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.080840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 
00:30:30.148 [2024-12-05 14:02:01.080945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.080971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.081061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.081090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.081181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.081207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.081284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.081309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.081406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.081451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 
00:30:30.148 [2024-12-05 14:02:01.081585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.081625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.081743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.081770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.081963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.081988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.082103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.082129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.082245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.082271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 
00:30:30.148 [2024-12-05 14:02:01.082359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.082385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.082486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.082512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.082631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.082657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.082789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.082847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.082989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.083036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 
00:30:30.148 [2024-12-05 14:02:01.083147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.083173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.083275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.083301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.083508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.083548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.083647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.083678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.083773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.083801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 
00:30:30.148 [2024-12-05 14:02:01.083944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.083971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.084053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.084079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.084172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.084199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.084280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.084307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.084388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.084415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 
00:30:30.148 [2024-12-05 14:02:01.084536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.084562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.084648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.084674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.084790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.148 [2024-12-05 14:02:01.084816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.148 qpair failed and we were unable to recover it. 00:30:30.148 [2024-12-05 14:02:01.084925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.084964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.085055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 
00:30:30.149 [2024-12-05 14:02:01.085197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.085313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.085426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.085574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.085684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 
00:30:30.149 [2024-12-05 14:02:01.085823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.085945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.085971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.086115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.086142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.086258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.086283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.086422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.086448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 
00:30:30.149 [2024-12-05 14:02:01.086562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.086588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.086699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.086725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.086811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.086837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.086954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.086982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.087134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.087173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 
00:30:30.149 [2024-12-05 14:02:01.087294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.087322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.087445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.087473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.087585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.087611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.087695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.087721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.087829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.087854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 
00:30:30.149 [2024-12-05 14:02:01.087933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.087959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.088076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.088195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.088308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.088449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 
00:30:30.149 [2024-12-05 14:02:01.088563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.088671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.088805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.088913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.088939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 00:30:30.149 [2024-12-05 14:02:01.089048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.089074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it. 
00:30:30.149 [2024-12-05 14:02:01.089154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.149 [2024-12-05 14:02:01.089180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.149 qpair failed and we were unable to recover it.
[... the same error triplet (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 14:02:01.104752 for tqpairs 0x1315fa0, 0x7f9d24000b90, 0x7f9d28000b90, and 0x7f9d30000b90 ...]
00:30:30.151 [2024-12-05 14:02:01.099276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1323f30 is same with the state(6) to be set
00:30:30.152 [2024-12-05 14:02:01.104873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.104899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.104985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.105012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.105114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.105158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.105286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.105314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.105433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.105460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-05 14:02:01.105572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.105598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.105735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.105786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.105870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.105896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.106039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.106089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.106216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.106243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-05 14:02:01.106381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.106408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.106531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.106559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.106648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.106675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.106785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.106816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.106904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.106932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-05 14:02:01.107079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.107105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.107193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.107221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.107374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.107413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.107538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.107565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.107650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.107675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-05 14:02:01.107764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.107790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.107903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.107929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.108024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.108190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.108342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-05 14:02:01.108492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.108608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.108725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.108846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.108963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.108989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-05 14:02:01.109072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.109098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.109215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.109240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.109347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.109372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.109459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.109491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.109588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.109615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-05 14:02:01.109703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.109729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.109841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.109867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.109974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-05 14:02:01.110001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-05 14:02:01.110106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.110133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.110215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.110242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-05 14:02:01.110360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.110391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.110492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.110521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.110661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.110688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.110833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.110876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.111025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-05 14:02:01.111161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.111305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.111448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.111593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.111701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-05 14:02:01.111844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.111946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.111973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.112051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.112078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.112182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.112220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.112334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.112374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-05 14:02:01.112483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.112511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.112593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.112619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.112732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.112757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.112899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.112944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.113030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-05 14:02:01.113142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.113279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.113426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.113529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.113692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-05 14:02:01.113803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.113942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.113971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.114124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.114163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.114288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.114316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.114429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.114457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-05 14:02:01.114602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.114629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.114715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.114742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-05 14:02:01.114821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-05 14:02:01.114847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-05 14:02:01.114960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-05 14:02:01.114986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-05 14:02:01.115070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-05 14:02:01.115096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-12-05 14:02:01.115180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.115208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.115289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.115314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.115403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.115444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.115643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.115670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.115791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.115819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.115914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.115944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.116930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.116957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.117068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.117096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.117248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.117276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.117379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.117424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.117527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.117555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.117674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.117701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.117791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.117817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.117934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.117961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.118905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.118929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.119035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.119060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.119166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.119191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.119283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.119309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.119389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.119430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.119520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.119546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.119624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.119648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.155 [2024-12-05 14:02:01.119785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.155 [2024-12-05 14:02:01.119811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.155 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.119927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.119953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.120962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.120988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.121071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.121096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.121198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.121237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.121359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.121387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.121484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.121523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.121646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.121674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.121783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.121810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.121892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.121919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.122960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.122991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.123085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.123112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.123194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.123220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.123328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.123354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.123492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.123518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.123607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.123633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.123751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.123777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.123890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.123915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.124029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.124054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.124134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.124160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.124239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.124265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.124395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.124442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.124574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.124614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.124761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.124788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.124928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.156 [2024-12-05 14:02:01.124976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.156 qpair failed and we were unable to recover it.
00:30:30.156 [2024-12-05 14:02:01.125120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.125166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.125285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.125310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.125428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.125456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.125572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.125598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.125687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.125712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.125821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.125847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.125964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.125990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.126080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.126106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.126224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.126249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.126373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.126412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.126508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.126537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.126619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.126645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.126759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.126789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.126867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.126893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.127035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.127060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.127143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.127170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.127299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.127329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.127450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.127479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.127608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.127635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.127722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.127748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.127893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.127942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.128957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.128982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.129095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.129124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.129216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.129243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.129331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.129358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.129467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.129493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.129577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.129604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.129743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.129769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.129883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.129932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.130039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.130064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.130174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.130199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.157 qpair failed and we were unable to recover it.
00:30:30.157 [2024-12-05 14:02:01.130344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.157 [2024-12-05 14:02:01.130371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.158 qpair failed and we were unable to recover it.
00:30:30.158 [2024-12-05 14:02:01.130511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.158 [2024-12-05 14:02:01.130551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.158 qpair failed and we were unable to recover it.
00:30:30.158 [2024-12-05 14:02:01.130649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.158 [2024-12-05 14:02:01.130677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.158 qpair failed and we were unable to recover it.
00:30:30.158 [2024-12-05 14:02:01.130797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.130823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.130931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.130958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.131051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.131220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.131342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 
00:30:30.158 [2024-12-05 14:02:01.131461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.131583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.131694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.131797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.131912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.131938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 
00:30:30.158 [2024-12-05 14:02:01.132036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.132165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.132280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.132388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.132499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 
00:30:30.158 [2024-12-05 14:02:01.132609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.132721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.132855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.132881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.133024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.133052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.133263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.133302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 
00:30:30.158 [2024-12-05 14:02:01.133397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.133433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.133555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.133584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.133679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.133706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.133845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.133871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.133989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.134016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 
00:30:30.158 [2024-12-05 14:02:01.134116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.134142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.134239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.134278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.134396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.134428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.134543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.134569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 00:30:30.158 [2024-12-05 14:02:01.134655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.134681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.158 qpair failed and we were unable to recover it. 
00:30:30.158 [2024-12-05 14:02:01.134857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.158 [2024-12-05 14:02:01.134906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.134985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.135110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.135246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.135409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 
00:30:30.159 [2024-12-05 14:02:01.135564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.135675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.135776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.135895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.135928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.136029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.136068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 
00:30:30.159 [2024-12-05 14:02:01.136196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.136236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.136357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.136383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.136505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.136532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.136647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.136672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.136763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.136789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 
00:30:30.159 [2024-12-05 14:02:01.136903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.136930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.137044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.137070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.137181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.137206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.137320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.137348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.137481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.137511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 
00:30:30.159 [2024-12-05 14:02:01.137629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.137655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.137784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.137811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.137934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.137961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.138043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.138069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.138185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.138211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 
00:30:30.159 [2024-12-05 14:02:01.138296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.138324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.138430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.138470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.138606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.138634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.138722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.138748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.138864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.138891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 
00:30:30.159 [2024-12-05 14:02:01.138983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.139011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.139125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.139151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.139236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.139262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.139375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.139401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.139539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.139565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 
00:30:30.159 [2024-12-05 14:02:01.139678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.139705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.139781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.139807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.139980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.140029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.159 [2024-12-05 14:02:01.140119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.159 [2024-12-05 14:02:01.140158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.159 qpair failed and we were unable to recover it. 00:30:30.160 [2024-12-05 14:02:01.140272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.160 [2024-12-05 14:02:01.140301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.160 qpair failed and we were unable to recover it. 
00:30:30.160 [2024-12-05 14:02:01.140388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.160 [2024-12-05 14:02:01.140414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.160 qpair failed and we were unable to recover it. 00:30:30.160 [2024-12-05 14:02:01.140539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.160 [2024-12-05 14:02:01.140565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.160 qpair failed and we were unable to recover it. 00:30:30.160 [2024-12-05 14:02:01.140660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.160 [2024-12-05 14:02:01.140686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.160 qpair failed and we were unable to recover it. 00:30:30.160 [2024-12-05 14:02:01.140778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.160 [2024-12-05 14:02:01.140805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.160 qpair failed and we were unable to recover it. 00:30:30.160 [2024-12-05 14:02:01.140891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.160 [2024-12-05 14:02:01.140919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.160 qpair failed and we were unable to recover it. 
00:30:30.160 [2024-12-05 14:02:01.141037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.160 [2024-12-05 14:02:01.141063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.160 qpair failed and we were unable to recover it.
[The same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair, each ending in "qpair failed and we were unable to recover it.", repeats 114 more times between 14:02:01.141 and 14:02:01.157 (log prefixes 00:30:30.160-00:30:30.163), all targeting addr=10.0.0.2, port=4420 and cycling through tqpair handles 0x7f9d24000b90, 0x7f9d28000b90, 0x7f9d30000b90, and 0x1315fa0.]
00:30:30.163 [2024-12-05 14:02:01.156895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.156948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.157095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.157146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.157229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.157254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.157348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.157373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.157504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.157531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 
00:30:30.163 [2024-12-05 14:02:01.157609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.157635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.157722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.157747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.157854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.157907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.157991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.158017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.158137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.158163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 
00:30:30.163 [2024-12-05 14:02:01.158296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.158335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.158433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.158462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.158575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.158601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.158693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.158720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.158836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.158864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 
00:30:30.163 [2024-12-05 14:02:01.158980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.159086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.159226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.159342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.159482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 
00:30:30.163 [2024-12-05 14:02:01.159605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.159746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.159904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.159958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.160100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.160150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.160286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.160312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 
00:30:30.163 [2024-12-05 14:02:01.160441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.160480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.163 [2024-12-05 14:02:01.160574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.163 [2024-12-05 14:02:01.160603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.163 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.160700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.160729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.160847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.160874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.160960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.160986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.161116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.161167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.161250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.161276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.161394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.161430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.161552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.161579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.161696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.161722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.161842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.161869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.161963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.161989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.162100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.162126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.162207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.162235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.162315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.162344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.162469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.162496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.162579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.162605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.162710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.162736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.162830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.162859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.162976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.163142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.163251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.163364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.163486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.163649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.163795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.163936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.163963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.164091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.164145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.164258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.164285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.164367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.164393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.164486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.164513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.164621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.164649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.164741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.164768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.164878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.164904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.165046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.165072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.165179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.165206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.165328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.165355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.165459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.165504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.165623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.165651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 00:30:30.164 [2024-12-05 14:02:01.165729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.164 [2024-12-05 14:02:01.165755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.164 qpair failed and we were unable to recover it. 
00:30:30.164 [2024-12-05 14:02:01.165867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.165905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.166057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.166106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.166193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.166219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.166307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.166333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.166448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.166475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 
00:30:30.165 [2024-12-05 14:02:01.166588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.166614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.166733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.166758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.166897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.166923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.167040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.167065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 00:30:30.165 [2024-12-05 14:02:01.167182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.165 [2024-12-05 14:02:01.167210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.165 qpair failed and we were unable to recover it. 
00:30:30.165 [2024-12-05 14:02:01.167348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.165 [2024-12-05 14:02:01.167375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.165 qpair failed and we were unable to recover it.
00:30:30.165 [2024-12-05 14:02:01.167477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.165 [2024-12-05 14:02:01.167516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.165 qpair failed and we were unable to recover it.
... last three-line sequence repeated continuously (log timestamps 00:30:30.165-00:30:30.168, event timestamps 14:02:01.167-14:02:01.182) for tqpair addresses 0x7f9d24000b90, 0x7f9d28000b90, 0x7f9d30000b90, and 0x1315fa0, all targeting addr=10.0.0.2, port=4420 ...
00:30:30.168 [2024-12-05 14:02:01.182978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.183005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.183090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.183115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.183196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.183222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.183364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.183390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.183524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.183562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 
00:30:30.168 [2024-12-05 14:02:01.183686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.183714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.183844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.183889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.184116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.184167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.184297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.184327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.184410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.184447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 
00:30:30.168 [2024-12-05 14:02:01.184533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.184560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.184656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.184683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.184770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.184797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.184931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.184969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.185103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.185155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 
00:30:30.168 [2024-12-05 14:02:01.185266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.185293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.185377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.185403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.185568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.185607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.185737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.185789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.185929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.185974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 
00:30:30.168 [2024-12-05 14:02:01.186069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.186095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.186209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.186234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.186342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.168 [2024-12-05 14:02:01.186368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.168 qpair failed and we were unable to recover it. 00:30:30.168 [2024-12-05 14:02:01.186493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.186520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.186610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.186636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.186717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.186743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.186821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.186846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.186953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.186979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.187071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.187097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.187175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.187201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.187314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.187340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.187425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.187451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.187564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.187591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.187702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.187732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.187838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.187864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.188003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.188115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.188223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.188361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.188486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.188636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.188775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.188919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.188969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.189050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.189076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.189185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.189212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.189322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.189348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.189451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.189477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.189590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.189616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.189705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.189731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.189816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.189842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.189980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.190006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.190123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.190151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.190278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.190318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.190434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.190463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.190607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.190633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.190738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.190787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.190932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.190981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.191113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.191166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.191286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.191313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.191393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.191430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 
00:30:30.169 [2024-12-05 14:02:01.191523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.169 [2024-12-05 14:02:01.191551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.169 qpair failed and we were unable to recover it. 00:30:30.169 [2024-12-05 14:02:01.191636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.191662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.191796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.191846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.191992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.192035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.192178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.192205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 
00:30:30.170 [2024-12-05 14:02:01.192335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.192374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.192496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.192524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.192630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.192655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.192776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.192802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.192976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 
00:30:30.170 [2024-12-05 14:02:01.193126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.193324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.193438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.193558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.193677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 
00:30:30.170 [2024-12-05 14:02:01.193784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.193926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.193951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.194063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.194089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.194207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.194233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.194320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.194345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 
00:30:30.170 [2024-12-05 14:02:01.194487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.194513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.194631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.194656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.194734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.194759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.194869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.194895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 00:30:30.170 [2024-12-05 14:02:01.195011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.170 [2024-12-05 14:02:01.195037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.170 qpair failed and we were unable to recover it. 
00:30:30.170 [2024-12-05 14:02:01.195128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.195154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.195307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.195346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.195461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.195492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.195693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.195720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.195859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.195886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.196972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.196997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.197107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.197133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.170 [2024-12-05 14:02:01.197211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.170 [2024-12-05 14:02:01.197237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.170 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.197350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.197383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.197490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.197529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.197629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.197658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.197744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.197771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.197909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.197935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.198872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.198981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.199008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.199123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.199172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.199289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.199315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.199455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.199482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.199596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.199623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.199744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.199770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.199885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.199912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.199994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.200135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.200299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.200413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.200564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.200710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.200817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.200957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.200987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.201099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.201124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.201211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.201239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.201372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.201411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.201541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.201570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.201682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.201709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.201795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.201822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.201911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.201937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.202048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.202075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.202192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.171 [2024-12-05 14:02:01.202219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.171 qpair failed and we were unable to recover it.
00:30:30.171 [2024-12-05 14:02:01.202328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.202354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.202436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.202463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.202569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.202607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.202702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.202730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.202844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.202870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.203956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.203981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.204095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.204120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.204233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.204258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.204353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.204392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.204497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.204526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.204627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.204655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.204737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.204763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.204853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.204881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.205050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.205159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.205279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.205425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.205568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.205682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.205865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.205976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.206001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.206090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.206117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.206208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.206233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.206350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.206384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.206487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.206515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.206632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.206658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.206774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.172 [2024-12-05 14:02:01.206802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.172 qpair failed and we were unable to recover it.
00:30:30.172 [2024-12-05 14:02:01.206900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.206927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.207890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.173 [2024-12-05 14:02:01.207916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.173 qpair failed and we were unable to recover it.
00:30:30.173 [2024-12-05 14:02:01.208007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.208138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.208298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.208407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.208537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 
00:30:30.173 [2024-12-05 14:02:01.208680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.208791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.208909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.208935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.209078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.209105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.209219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.209245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 
00:30:30.173 [2024-12-05 14:02:01.209361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.209388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.209483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.209509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.209588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.209614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.209796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.209821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.209932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.209958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 
00:30:30.173 [2024-12-05 14:02:01.210055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.210094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.210198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.210226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.210345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.210373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.210507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.210534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.210621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.210647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 
00:30:30.173 [2024-12-05 14:02:01.210762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.210788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.210871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.210899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.211029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.211186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.211310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 
00:30:30.173 [2024-12-05 14:02:01.211424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.211590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.211721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.211836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.211942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.211967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 
00:30:30.173 [2024-12-05 14:02:01.212049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.212076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.173 qpair failed and we were unable to recover it. 00:30:30.173 [2024-12-05 14:02:01.212170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.173 [2024-12-05 14:02:01.212198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.212288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.212314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.212454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.212481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.212590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.212616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.212734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.212763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.212860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.212887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.213001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.213113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.213269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.213432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.213580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.213693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.213806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.213970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.213995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.214115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.214142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.214227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.214254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.214347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.214375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.214476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.214503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.214589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.214615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.214731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.214759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.214845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.214871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.214976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.215001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.215140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.215188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.215281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.215308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.215447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.215486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.215606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.215633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.215764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.215812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.215950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.215998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.216086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.216115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.216229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.216256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.216369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.216395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.216519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.216545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.216658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.216684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.216791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.216817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.216900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.216925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.217050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.217078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.217192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.217218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.217311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.217336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.217425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.217451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 
00:30:30.174 [2024-12-05 14:02:01.217526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.217551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.217666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.217690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.174 [2024-12-05 14:02:01.217806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.174 [2024-12-05 14:02:01.217831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.174 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.217949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.217974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.218059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.218085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.218206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.218232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.218339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.218363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.218474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.218501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.218613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.218637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.218751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.218775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.218891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.218916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.219483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.219970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.219994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.220111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.220136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.220250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.220274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.220390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.220414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.220507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.220533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.220616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.220647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.220736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.220762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.220877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.220903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.220980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.221102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.221212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.221321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.221449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.221593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.221711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.221858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.221885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.222002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.222146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.222253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.222391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.222511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 
00:30:30.175 [2024-12-05 14:02:01.222652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.222771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.222875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.222901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.175 [2024-12-05 14:02:01.222979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.175 [2024-12-05 14:02:01.223004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.175 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.223134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.223171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.223302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.223341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.223461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.223490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.223602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.223627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.223706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.223733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.223855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.223882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.223962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.223989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.224071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.224097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.224201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.224240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.224373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.224400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.224525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.224551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.224636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.224661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.224742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.224768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.224881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.225015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.225157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.225307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.225423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.225535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.225676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.225784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.225895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.225921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.226059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.226085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.226196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.226221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.226363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.226389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.226512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.226538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.226622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.226647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.226779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.226828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.226911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.226937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.227017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.227143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.227275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.227415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.227578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.227730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.227845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 
00:30:30.176 [2024-12-05 14:02:01.227960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.227986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.228068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.228093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.228188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.176 [2024-12-05 14:02:01.228214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.176 qpair failed and we were unable to recover it. 00:30:30.176 [2024-12-05 14:02:01.228311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.228337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.228429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.228456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 
00:30:30.177 [2024-12-05 14:02:01.228569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.228596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.228707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.228733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.228845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.228872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.228985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.229011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.229101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.229128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 
00:30:30.177 [2024-12-05 14:02:01.229282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.229322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.229433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.229473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.229607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.229636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.229755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.229782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.229867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.229893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 
00:30:30.177 [2024-12-05 14:02:01.229996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.230119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.230256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.230393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.230544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 
00:30:30.177 [2024-12-05 14:02:01.230658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.230801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.230904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.230932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.231025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.231152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 
00:30:30.177 [2024-12-05 14:02:01.231272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.231379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.231536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.231646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.231759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 
00:30:30.177 [2024-12-05 14:02:01.231871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.231897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.231974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.232000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.177 [2024-12-05 14:02:01.232087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.177 [2024-12-05 14:02:01.232113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.177 qpair failed and we were unable to recover it. 00:30:30.178 [2024-12-05 14:02:01.232230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.178 [2024-12-05 14:02:01.232256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.178 qpair failed and we were unable to recover it. 00:30:30.178 [2024-12-05 14:02:01.232373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.178 [2024-12-05 14:02:01.232400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.178 qpair failed and we were unable to recover it. 
00:30:30.178 [2024-12-05 14:02:01.232491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.178 [2024-12-05 14:02:01.232517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.178 qpair failed and we were unable to recover it. 00:30:30.178 [2024-12-05 14:02:01.232599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.178 [2024-12-05 14:02:01.232625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.178 qpair failed and we were unable to recover it. 00:30:30.178 [2024-12-05 14:02:01.232715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.178 [2024-12-05 14:02:01.232742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.178 qpair failed and we were unable to recover it. 00:30:30.178 [2024-12-05 14:02:01.232866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.178 [2024-12-05 14:02:01.232892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.178 qpair failed and we were unable to recover it. 00:30:30.178 [2024-12-05 14:02:01.232976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.178 [2024-12-05 14:02:01.233001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.178 qpair failed and we were unable to recover it. 
00:30:30.178 [2024-12-05 14:02:01.233114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.233140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.233245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.233270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.233360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.233385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.233473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.233500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.233625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.233663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.233795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.233833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.233931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.233958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.234085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.234262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.234383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.234559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.234672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.234782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.234890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.234981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.235112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.235256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.235395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.235510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.235646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.235777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.235889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.235915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.236954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.236982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.178 qpair failed and we were unable to recover it.
00:30:30.178 [2024-12-05 14:02:01.237070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.178 [2024-12-05 14:02:01.237098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.237195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.237223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.237311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.237338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.237460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.237496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.237591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.237619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.237711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.237737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.237831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.237858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.237947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.237973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.238121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.238148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.238258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.238286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.238409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.238445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.238543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.238571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.238690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.238717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.238798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.238824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.238951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.239104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.239263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.239395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.239534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.239654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.239757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.239902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.239934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.240933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.240960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.179 qpair failed and we were unable to recover it.
00:30:30.179 [2024-12-05 14:02:01.241046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.179 [2024-12-05 14:02:01.241073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.241155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.241181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.241260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.241287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.241397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.241432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.241521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.241547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.241643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.241670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.241753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.241780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.241914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.241940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.242092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.242242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.242388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.242509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.242654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.242765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.242879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.242997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.243132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.243248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.243378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.243537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.243672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.243801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.243942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.243968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.244067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.244092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.244186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.180 [2024-12-05 14:02:01.244212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.180 qpair failed and we were unable to recover it.
00:30:30.180 [2024-12-05 14:02:01.244342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.244369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.244484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.244514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.244606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.244635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.244748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.244774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.244950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.244976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.245095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.245122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.245239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.245272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.245403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.245436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.245525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.245551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.245670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.181 [2024-12-05 14:02:01.245696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.181 qpair failed and we were unable to recover it.
00:30:30.181 [2024-12-05 14:02:01.245781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.245807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.245908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.245942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.246102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.246151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.246272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.246299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.246395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.246426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 
00:30:30.181 [2024-12-05 14:02:01.246519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.246544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.246636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.246662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.246775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.246801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.246896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.246923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.247014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.247041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 
00:30:30.181 [2024-12-05 14:02:01.247168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.247207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.247299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.247327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.247406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.247440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.247554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.247582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.181 [2024-12-05 14:02:01.247679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.247705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 
00:30:30.181 [2024-12-05 14:02:01.247824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.181 [2024-12-05 14:02:01.247849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.181 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.247932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.247959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.248079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.248191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.248312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 
00:30:30.182 [2024-12-05 14:02:01.248469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.248583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.248686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.248791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.248916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.248942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 
00:30:30.182 [2024-12-05 14:02:01.249026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.249143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.249299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.249457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.249576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 
00:30:30.182 [2024-12-05 14:02:01.249713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.249822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.249933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.249960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.250052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.250079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-12-05 14:02:01.250170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-12-05 14:02:01.250195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 
00:30:30.182 [2024-12-05 14:02:01.250279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.250304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.250398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.250430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.250517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.250542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.250655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.250681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.250792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.250819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-12-05 14:02:01.250912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.250942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.251032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.251059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.251142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.251169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.251276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.251302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.251430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.251461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-12-05 14:02:01.251551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.251577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.251666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.251692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.251789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.251824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.251973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.252026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.252164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.252211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-12-05 14:02:01.252308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.252337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.252437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.252464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.252580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.252607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.252723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.252750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.252852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.252887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-12-05 14:02:01.253011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.253037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.253122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.253150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.253244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.253270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.253359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.253385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-12-05 14:02:01.253508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-12-05 14:02:01.253535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-12-05 14:02:01.253627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.253654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.253780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.253807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.253887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.253913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.254029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.254142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-12-05 14:02:01.254258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.254402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.254525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.254646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.254789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-12-05 14:02:01.254902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.254930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.255044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.255156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.255275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.255413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-12-05 14:02:01.255527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.255683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.255792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.255909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.255936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.256048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.256075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-12-05 14:02:01.256158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.256184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.256293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.256320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.256408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.256445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.256532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.256559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-12-05 14:02:01.256647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.256673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-12-05 14:02:01.256751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-12-05 14:02:01.256777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (connect() failed, errno = 111; addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats ~114 more times between 14:02:01.256 and 14:02:01.271 for tqpairs 0x1315fa0, 0x7f9d24000b90, 0x7f9d28000b90 and 0x7f9d30000b90 ...]
00:30:30.188 [2024-12-05 14:02:01.271976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.272098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.272232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.272368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.272501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 
00:30:30.188 [2024-12-05 14:02:01.272612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.272721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.272862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.272888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.272976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.273090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 
00:30:30.188 [2024-12-05 14:02:01.273195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.273332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.273449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.273567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.273687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 
00:30:30.188 [2024-12-05 14:02:01.273826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.273970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.273997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.274078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.274104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.274221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.274247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 00:30:30.188 [2024-12-05 14:02:01.274333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.188 [2024-12-05 14:02:01.274360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.188 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.274452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.274480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.274567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.274594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.274679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.274705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.274794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.274820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.274913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.274940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.275050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.275186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.275299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.275434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.275551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.275669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.275812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.275926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.275952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.276032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.276153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.276257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.276363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.276474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.276607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.276748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.276862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.276888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.276983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.277144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.277255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.277375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.277507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.277618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.277762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.277889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.277915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.278000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.278138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.278304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.278408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.278538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.278647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 
00:30:30.189 [2024-12-05 14:02:01.278785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.278927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.278953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.189 [2024-12-05 14:02:01.279043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.189 [2024-12-05 14:02:01.279072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.189 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.279159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.279185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.279272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.279299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 
00:30:30.190 [2024-12-05 14:02:01.279412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.279446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.279570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.279597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.279679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.279705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.279784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.279810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.279898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.279926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 
00:30:30.190 [2024-12-05 14:02:01.280012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.280039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.280135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.280174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.280281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.280310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.280460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.280487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.280576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.280603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 
00:30:30.190 [2024-12-05 14:02:01.280719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.280745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.280857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.280884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.280977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.281004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.281122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.281151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.281286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.281325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 
00:30:30.190 [2024-12-05 14:02:01.281412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.281448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.281586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.281612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.281731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.281762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.281846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.281871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-12-05 14:02:01.281983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-12-05 14:02:01.282008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 
00:30:30.193 [2024-12-05 14:02:01.296030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.296055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.296142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.296168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.296278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.296305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.296430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.296460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.296592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.296630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 
00:30:30.193 [2024-12-05 14:02:01.296761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.296789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.296897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.296924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.297063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.297089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.297199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.297225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.297357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.297384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 
00:30:30.193 [2024-12-05 14:02:01.297488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.297516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.297604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.297631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.297748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.297775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.297867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.297894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-12-05 14:02:01.297983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-12-05 14:02:01.298009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 
00:30:30.193 [2024-12-05 14:02:01.298117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.298143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.298220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.298246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.298355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.298393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.298500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.298529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.298648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.298676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
00:30:30.194 [2024-12-05 14:02:01.298815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.298841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.298932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.298958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.299104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.299153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.299276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.299304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.299396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.299427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
00:30:30.194 [2024-12-05 14:02:01.299544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.299571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.299658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.299684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.299763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.299790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.299902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.299951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.300035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
00:30:30.194 [2024-12-05 14:02:01.300153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.300289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.300432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.300573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.300719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
00:30:30.194 [2024-12-05 14:02:01.300831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.300943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.300969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.301052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.301174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.301321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
00:30:30.194 [2024-12-05 14:02:01.301479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.301587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.301702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.301806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.301907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.301932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
00:30:30.194 [2024-12-05 14:02:01.302021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.302047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.302161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.302188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.302276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.302303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.302411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.302464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.302591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.302618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
00:30:30.194 [2024-12-05 14:02:01.302703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.302729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.302865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.302891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.302976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-12-05 14:02:01.303001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 00:30:30.194 [2024-12-05 14:02:01.303144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.303188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.303299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.303325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 
00:30:30.195 [2024-12-05 14:02:01.303421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.303448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.303555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.303580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.303665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.303691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.303800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.303826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.303916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.303941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 
00:30:30.195 [2024-12-05 14:02:01.304023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.304156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.304291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.304411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.304549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 
00:30:30.195 [2024-12-05 14:02:01.304680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.304796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.304910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.304936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.305017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.305149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 
00:30:30.195 [2024-12-05 14:02:01.305261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.305380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.305515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.305663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.305773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 
00:30:30.195 [2024-12-05 14:02:01.305910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.305937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.306019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.306140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.306255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.306369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 
00:30:30.195 [2024-12-05 14:02:01.306490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.306637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.306748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.306886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.306913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 00:30:30.195 [2024-12-05 14:02:01.307004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.195 [2024-12-05 14:02:01.307029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.195 qpair failed and we were unable to recover it. 
00:30:30.195 [2024-12-05 14:02:01.307134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.195 [2024-12-05 14:02:01.307160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.195 qpair failed and we were unable to recover it.
00:30:30.195 [2024-12-05 14:02:01.307244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.195 [2024-12-05 14:02:01.307270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.195 qpair failed and we were unable to recover it.
00:30:30.195 [2024-12-05 14:02:01.307354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.195 [2024-12-05 14:02:01.307379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.195 qpair failed and we were unable to recover it.
00:30:30.195 [2024-12-05 14:02:01.307489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.195 [2024-12-05 14:02:01.307528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.195 qpair failed and we were unable to recover it.
00:30:30.195 [2024-12-05 14:02:01.307630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.195 [2024-12-05 14:02:01.307659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.195 qpair failed and we were unable to recover it.
00:30:30.195 [2024-12-05 14:02:01.307767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.195 [2024-12-05 14:02:01.307794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.195 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.307872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.307898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.308023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.308050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.308149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.308190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.308287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.308313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.308410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.308458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.308572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.308598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.308719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.308745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.308832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.308858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.309948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.309974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.310958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.310984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.311071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.311099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.311192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.311219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.311300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.311326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.311438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.311465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.311579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.311606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.311751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.311789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.311896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.311923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.312030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.312193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.312327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.312460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.312570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.312682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.196 [2024-12-05 14:02:01.312794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.196 qpair failed and we were unable to recover it.
00:30:30.196 [2024-12-05 14:02:01.312880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.312912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.313037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.313146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.313309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.313429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.313578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.313692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.313854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.313991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.314132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.314280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.314430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.314539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.314649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.314770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.314913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.314941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.315950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.315988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.316919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.316945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.197 [2024-12-05 14:02:01.317036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.197 [2024-12-05 14:02:01.317062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.197 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.317179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.317207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.317298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.317326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.317414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.317451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.317548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.317575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.317683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.317709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.317796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.317822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.317935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.317961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.318934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.318960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.319042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.319067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.319147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.319174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.319296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.198 [2024-12-05 14:02:01.319322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.198 qpair failed and we were unable to recover it.
00:30:30.198 [2024-12-05 14:02:01.319405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.319438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.319515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.319541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.319652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.319677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.319772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.319800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.319888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.319915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 
00:30:30.198 [2024-12-05 14:02:01.320014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.320129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.320242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.320371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.320565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 
00:30:30.198 [2024-12-05 14:02:01.320673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.320811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.320917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.320944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.321031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.321059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.321188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.321215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 
00:30:30.198 [2024-12-05 14:02:01.321300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.321326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.321450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.321482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.321563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.321590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.321681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.198 [2024-12-05 14:02:01.321708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.198 qpair failed and we were unable to recover it. 00:30:30.198 [2024-12-05 14:02:01.321823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.321849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.321934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.321961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.322111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.322223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.322335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.322473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.322588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.322692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.322806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.322950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.322977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.323062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.323090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.323177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.323205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.323290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.323318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.323425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.323451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.323589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.323615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.323697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.323723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.323802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.323828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.323991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.324125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.324230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.324368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.324484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.324606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.324720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.324834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.324862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.324985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.325101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.325219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.325336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.325482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.325610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.325733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.325842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.325955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.325981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.326075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.326113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.326202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.326230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-12-05 14:02:01.326350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.326378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.326468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-12-05 14:02:01.326500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-12-05 14:02:01.326589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.326616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.326735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.326762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.326850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.326876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-12-05 14:02:01.326992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.327108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.327221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.327363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.327518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-12-05 14:02:01.327629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.327766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.327904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.327930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.328022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.328049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.328136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.328163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-12-05 14:02:01.328333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.328362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.328458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.328485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.328577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.328604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.328682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.328708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.328905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.328931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-12-05 14:02:01.329047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.329163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.329280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.329401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.329525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-12-05 14:02:01.329670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.329785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.329895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.329922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.330019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.330058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-12-05 14:02:01.330158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-12-05 14:02:01.330184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-12-05 14:02:01.330294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.330320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.330435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.330462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.330551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.330577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.330658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.330683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.330771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.330796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.330914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.330939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.331035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.331060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.331143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.331168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.331249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.331277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.200 [2024-12-05 14:02:01.331367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.200 [2024-12-05 14:02:01.331394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.200 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.331492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.331518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.331655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.331680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.331770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.331796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.331887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.331912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.332922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.332950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.333066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.333094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.333177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.333204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.333284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.333310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.333441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.333480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.333602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.333629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.333760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.333788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.333899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.333925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.334902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.334929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.335044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.335071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.335154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.335184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.335301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.335328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.335444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.335472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.335671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.335696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.201 qpair failed and we were unable to recover it.
00:30:30.201 [2024-12-05 14:02:01.335779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.201 [2024-12-05 14:02:01.335805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.335914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.335939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [2024-12-05 14:02:01.336918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-12-05 14:02:01.336965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.337086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.337113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.337208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.337236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.337358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.337385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.337487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.337514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.337599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.337626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.337766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.337791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.337911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.337939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.338095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.338284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.338423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.338540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.338653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.338761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.338879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.338978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.339004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.339111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.339136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.339250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.339276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.339354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.339379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.339488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.339517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.339604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.339630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.339714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.203 [2024-12-05 14:02:01.339741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.203 qpair failed and we were unable to recover it.
00:30:30.203 [2024-12-05 14:02:01.339829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.339855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.339963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.339990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.340098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.340137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.340231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.340258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.340378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.340403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.340535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.340561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.340651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.340677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.340792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.340818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.340928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.340953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.341066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.341091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.341219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.341258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.341385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.341413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.341520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.341547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.341639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.341665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.341811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.341837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.341923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.341950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.342065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.342114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.342229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.342256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.342339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.342366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.342477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.342508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.342606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.342631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.342707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.342733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.204 qpair failed and we were unable to recover it.
00:30:30.204 [2024-12-05 14:02:01.342842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.204 [2024-12-05 14:02:01.342868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.342976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.343860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.343988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.344014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.344133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.344159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.344266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.344305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.344391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.344427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.344530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.344569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.344672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.344702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.344852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.344900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.344990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.345018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.345131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.345158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.345265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.345292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.345378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.205 [2024-12-05 14:02:01.345406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.205 qpair failed and we were unable to recover it.
00:30:30.205 [2024-12-05 14:02:01.345513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.345541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.205 [2024-12-05 14:02:01.345637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.345664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.205 [2024-12-05 14:02:01.345781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.345807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.205 [2024-12-05 14:02:01.345896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.345924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.205 [2024-12-05 14:02:01.346069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.346096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 
00:30:30.205 [2024-12-05 14:02:01.346188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.346214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.205 [2024-12-05 14:02:01.346332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.346361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.205 [2024-12-05 14:02:01.346470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.346509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.205 [2024-12-05 14:02:01.346624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.205 [2024-12-05 14:02:01.346652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.205 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.346765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.346791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.346897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.346930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.347032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.347058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.347151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.347176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.347259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.347287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.347430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.347469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.347570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.347598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.347683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.347709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.347793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.347824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.347972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.348126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.348263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.348400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.348547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.348652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.348786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.348926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.348952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.349067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.349093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.349177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.349203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.349295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.349323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.349446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.349475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.349599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.349627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.349721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.349747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.349860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.349886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.349979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.350117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.350306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.350430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.350578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.350699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.350810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.350959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.350985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.351073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.351101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.351218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.351247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.351363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.351389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.351490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.351522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.351608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.351635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.351747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.351772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.351888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.351914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.352003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.352119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.352246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.352357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.352482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.352595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.352736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 
00:30:30.206 [2024-12-05 14:02:01.352886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.352912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.353027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.206 [2024-12-05 14:02:01.353055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.206 qpair failed and we were unable to recover it. 00:30:30.206 [2024-12-05 14:02:01.353162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.353188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.353315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.353342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.353434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.353462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-05 14:02:01.353573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.353599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.353711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.353737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.353818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.353845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.353945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.353972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.354091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.354130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-05 14:02:01.354229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.354256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.354342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.354368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.354481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.354507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.354620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.354646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.354727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.354753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-05 14:02:01.354859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.354884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.355014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.355044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.355159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.355186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.355304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.355332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.355472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.355500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-05 14:02:01.355617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.355643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.355758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.355784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.355906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.355933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.356028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.356054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.356145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.356172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-05 14:02:01.356253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.356280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.356397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.356441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.356570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.356597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.356735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.356762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 00:30:30.207 [2024-12-05 14:02:01.356881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.207 [2024-12-05 14:02:01.356912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.207 qpair failed and we were unable to recover it. 
00:30:30.207 [2024-12-05 14:02:01.357078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.357126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.357215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.357243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.357372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.357410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.357542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.357569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.357687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.357713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.357822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.357847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.357957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.357983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.358064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.358089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.358167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.358193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.358316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.358342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.358427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.358453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.358544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.358570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.207 [2024-12-05 14:02:01.358678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.207 [2024-12-05 14:02:01.358704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.207 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.358829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.358855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.358999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.359111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.359253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.359391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.359547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.359660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.359799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.359919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.359946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.360034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.360062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.360174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.360202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.360309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.360334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.360435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.360475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.360595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.360622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.360739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.360765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.360885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.360911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.361017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.361043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.361132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.361157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.361271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.361298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.361421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.361450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.361577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.361614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.361740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.361767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.361882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.361908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.362899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.362925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.363896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.363921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.364008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.364034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.364152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.208 [2024-12-05 14:02:01.364177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.208 qpair failed and we were unable to recover it.
00:30:30.208 [2024-12-05 14:02:01.364261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.364287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.364397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.364430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.364515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.364541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.364623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.364648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.364725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.364751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.364862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.364887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.365942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.365969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.366090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.366117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.366213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.366239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.366356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.366382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.366504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.366531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.366648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.366675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.366753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.366779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.366859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.366885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.367003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.367049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.367188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.367214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.367298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.367323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.367410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.367445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.367548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.367585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.367716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.367743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.367839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.367872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.368012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.209 [2024-12-05 14:02:01.368048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.209 qpair failed and we were unable to recover it.
00:30:30.209 [2024-12-05 14:02:01.368151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.368180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.368293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.368319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.368437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.368464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.368550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.368575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.368686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.368711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.368788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.368814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.368924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.368950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.369092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.369118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.369251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.369290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.369387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.369415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.369521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.369548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.210 qpair failed and we were unable to recover it.
00:30:30.210 [2024-12-05 14:02:01.369638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.210 [2024-12-05 14:02:01.369664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.211 qpair failed and we were unable to recover it.
00:30:30.211 [2024-12-05 14:02:01.369746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.369772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.369854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.369881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.369968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.369995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.370092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.370120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.370239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.370267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 
00:30:30.211 [2024-12-05 14:02:01.370377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.370405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.370495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.370521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.370609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.370634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.370720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.370746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 00:30:30.211 [2024-12-05 14:02:01.370834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.211 [2024-12-05 14:02:01.370860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.211 qpair failed and we were unable to recover it. 
00:30:30.211 [2024-12-05 14:02:01.370946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.370972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.371095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.371128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.371228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.371261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.371390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.371436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.371535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.371561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 
00:30:30.212 [2024-12-05 14:02:01.371645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.371671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.371788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.371813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.371927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.371953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.372047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.372073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 00:30:30.212 [2024-12-05 14:02:01.372201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.212 [2024-12-05 14:02:01.372240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.212 qpair failed and we were unable to recover it. 
00:30:30.212 [2024-12-05 14:02:01.372332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.372359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.372454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.372482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.372576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.372603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.372683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.372709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.372807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.372833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 
00:30:30.213 [2024-12-05 14:02:01.372916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.372942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.373027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.373053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.373143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.373170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.373256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.373282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.373377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.373402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 
00:30:30.213 [2024-12-05 14:02:01.373501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.373526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.373609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.373636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.373726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.213 [2024-12-05 14:02:01.373752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.213 qpair failed and we were unable to recover it. 00:30:30.213 [2024-12-05 14:02:01.373868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.373893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 00:30:30.214 [2024-12-05 14:02:01.373981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.374006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 
00:30:30.214 [2024-12-05 14:02:01.374089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.374114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 00:30:30.214 [2024-12-05 14:02:01.374235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.374262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 00:30:30.214 [2024-12-05 14:02:01.374395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.374441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 00:30:30.214 [2024-12-05 14:02:01.374542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.374581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 00:30:30.214 [2024-12-05 14:02:01.374702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.374730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 
00:30:30.214 [2024-12-05 14:02:01.374810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-12-05 14:02:01.374838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.214 qpair failed and we were unable to recover it. 00:30:30.215 [2024-12-05 14:02:01.374933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.374960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 00:30:30.215 [2024-12-05 14:02:01.375073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.375099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 00:30:30.215 [2024-12-05 14:02:01.375192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.375220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 00:30:30.215 [2024-12-05 14:02:01.375314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.375340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 
00:30:30.215 [2024-12-05 14:02:01.375455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.375482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 00:30:30.215 [2024-12-05 14:02:01.375566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.375592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 00:30:30.215 [2024-12-05 14:02:01.375678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.375707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 00:30:30.215 [2024-12-05 14:02:01.375798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.215 [2024-12-05 14:02:01.375824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.215 qpair failed and we were unable to recover it. 00:30:30.216 [2024-12-05 14:02:01.375918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-12-05 14:02:01.375944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.216 qpair failed and we were unable to recover it. 
00:30:30.216 [2024-12-05 14:02:01.376035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-12-05 14:02:01.376061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.216 qpair failed and we were unable to recover it. 00:30:30.216 [2024-12-05 14:02:01.376179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-12-05 14:02:01.376210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.216 qpair failed and we were unable to recover it. 00:30:30.216 [2024-12-05 14:02:01.376364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-12-05 14:02:01.376391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.216 qpair failed and we were unable to recover it. 00:30:30.216 [2024-12-05 14:02:01.376499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-12-05 14:02:01.376538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.216 qpair failed and we were unable to recover it. 00:30:30.216 [2024-12-05 14:02:01.376629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-12-05 14:02:01.376656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.216 qpair failed and we were unable to recover it. 
00:30:30.216 [2024-12-05 14:02:01.376771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-12-05 14:02:01.376797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.376884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.376932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.377148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.377195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.377286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.377311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.377389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.377422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 
00:30:30.217 [2024-12-05 14:02:01.377515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.377540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.377619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.377644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.377751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.377777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.377895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.377921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 00:30:30.217 [2024-12-05 14:02:01.378010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.217 [2024-12-05 14:02:01.378036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.217 qpair failed and we were unable to recover it. 
00:30:30.217 [2024-12-05 14:02:01.378142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.218 [2024-12-05 14:02:01.378182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.218 qpair failed and we were unable to recover it. 00:30:30.218 [2024-12-05 14:02:01.378313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.218 [2024-12-05 14:02:01.378341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.218 qpair failed and we were unable to recover it. 00:30:30.218 [2024-12-05 14:02:01.378485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.218 [2024-12-05 14:02:01.378512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.218 qpair failed and we were unable to recover it. 00:30:30.218 [2024-12-05 14:02:01.378596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.218 [2024-12-05 14:02:01.378623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.218 qpair failed and we were unable to recover it. 00:30:30.218 [2024-12-05 14:02:01.378744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.218 [2024-12-05 14:02:01.378771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.218 qpair failed and we were unable to recover it. 
00:30:30.218 [2024-12-05 14:02:01.378859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.218 [2024-12-05 14:02:01.378885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.218 qpair failed and we were unable to recover it. 00:30:30.218 [2024-12-05 14:02:01.378968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.218 [2024-12-05 14:02:01.378994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.218 qpair failed and we were unable to recover it. 00:30:30.218 [2024-12-05 14:02:01.379084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.379195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.379308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 
00:30:30.219 [2024-12-05 14:02:01.379413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.379564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.379682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.379820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.379943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.379969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 
00:30:30.219 [2024-12-05 14:02:01.380051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.380077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.380156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.219 [2024-12-05 14:02:01.380183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.219 qpair failed and we were unable to recover it. 00:30:30.219 [2024-12-05 14:02:01.380296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.220 [2024-12-05 14:02:01.380323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.220 qpair failed and we were unable to recover it. 00:30:30.220 [2024-12-05 14:02:01.380450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.220 [2024-12-05 14:02:01.380478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.220 qpair failed and we were unable to recover it. 00:30:30.220 [2024-12-05 14:02:01.380566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.220 [2024-12-05 14:02:01.380592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.220 qpair failed and we were unable to recover it. 
00:30:30.220 [2024-12-05 14:02:01.380677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.220 [2024-12-05 14:02:01.380704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.220 qpair failed and we were unable to recover it. 00:30:30.220 [2024-12-05 14:02:01.380813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.220 [2024-12-05 14:02:01.380839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.220 qpair failed and we were unable to recover it. 00:30:30.220 [2024-12-05 14:02:01.380916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.221 [2024-12-05 14:02:01.380941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.221 qpair failed and we were unable to recover it. 00:30:30.221 [2024-12-05 14:02:01.381033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.221 [2024-12-05 14:02:01.381073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.221 qpair failed and we were unable to recover it. 00:30:30.221 [2024-12-05 14:02:01.381159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.221 [2024-12-05 14:02:01.381187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.221 qpair failed and we were unable to recover it. 
00:30:30.221-00:30:30.225 [the same three-line error sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it. — repeats from 2024-12-05 14:02:01.381322 through 14:02:01.396044 for tqpairs 0x1315fa0, 0x7f9d24000b90, 0x7f9d28000b90, and 0x7f9d30000b90, all with addr=10.0.0.2, port=4420]
00:30:30.225 [2024-12-05 14:02:01.396131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.396266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.396370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.396488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.396599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 
00:30:30.225 [2024-12-05 14:02:01.396729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.396836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.396959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.396986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.397083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.397216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 
00:30:30.225 [2024-12-05 14:02:01.397335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.397448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.397564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.397704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.397810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 
00:30:30.225 [2024-12-05 14:02:01.397954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.397980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.398072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.398209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.398317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.398480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 
00:30:30.225 [2024-12-05 14:02:01.398594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.398699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.398833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.398972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.398998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.399107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.399132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 
00:30:30.225 [2024-12-05 14:02:01.399217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.399244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.399389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.399415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.399503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.399529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.399625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.399650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.399728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.399754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 
00:30:30.225 [2024-12-05 14:02:01.399848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.399875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.400067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.400114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.400240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.400279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.400379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.400407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.400526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.400552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 
00:30:30.225 [2024-12-05 14:02:01.400644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.400669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.400795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.400843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.400946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.400980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.401108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.225 [2024-12-05 14:02:01.401134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.225 qpair failed and we were unable to recover it. 00:30:30.225 [2024-12-05 14:02:01.401267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.401306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.401403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.401437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.401534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.401562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.401762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.401787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.401877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.401903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.402042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.402089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.402278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.402303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.402393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.402424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.402511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.402537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.402620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.402645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.402732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.402757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.402843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.402868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.403051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.403077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.403205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.403245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.403344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.403373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.403478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.403506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.403600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.403627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.403751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.403799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.403879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.403905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.404039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.404160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.404321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.404451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.404564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.404709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.404825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.404941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.404969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.405099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.405129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.405224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.405252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.405365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.405393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.405500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.405527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.405620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.405646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.405787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.405833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.405981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.406015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.406167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.406218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 00:30:30.226 [2024-12-05 14:02:01.406314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.226 [2024-12-05 14:02:01.406340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.226 qpair failed and we were unable to recover it. 
00:30:30.226 [2024-12-05 14:02:01.406442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.226 [2024-12-05 14:02:01.406468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.226 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / "qpair failed and we were unable to recover it." messages repeat continuously from 14:02:01.406 through 14:02:01.421 against addr=10.0.0.2, port=4420, cycling over tqpair handles 0x1315fa0, 0x7f9d24000b90, 0x7f9d28000b90, and 0x7f9d30000b90 ...]
00:30:30.229 [2024-12-05 14:02:01.421537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.421564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.421652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.421679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.421761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.421789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.421901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.421928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.422026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 
00:30:30.229 [2024-12-05 14:02:01.422189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.422302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.422408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.422528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.422642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 
00:30:30.229 [2024-12-05 14:02:01.422741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.422859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.422964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.422989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.423106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.423213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 
00:30:30.229 [2024-12-05 14:02:01.423353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.423477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.423587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.423689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.423798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 
00:30:30.229 [2024-12-05 14:02:01.423913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.423937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.424025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.424049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.424146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.424186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.424289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.424319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.424451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.424480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 
00:30:30.229 [2024-12-05 14:02:01.424576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.424602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.424691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.424717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.424859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.424885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.424974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.425089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 
00:30:30.229 [2024-12-05 14:02:01.425197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.425305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.425424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.425536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.425669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 
00:30:30.229 [2024-12-05 14:02:01.425801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.229 [2024-12-05 14:02:01.425825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.229 qpair failed and we were unable to recover it. 00:30:30.229 [2024-12-05 14:02:01.425943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.425968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.426060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.426167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.426271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.426382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.426510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.426639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.426745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.426849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.426961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.426988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.427134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.427160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.427264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.427303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.427409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.427446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.427563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.427591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.427683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.427711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.427824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.427859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.427992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.428019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.428114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.428141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.428261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.428288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.428400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.428435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.428548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.428575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.428706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.428740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.428866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.428917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.429035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.429175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.429283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.429407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.429522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.429635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.429779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.429895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.429921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.430011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.430123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.430265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.430404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.430526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.430633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.430760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.430912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.430939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.230 [2024-12-05 14:02:01.431065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.431090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.431198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.431222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.431311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.431336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.431437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.431463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 00:30:30.230 [2024-12-05 14:02:01.431541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.230 [2024-12-05 14:02:01.431566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.230 qpair failed and we were unable to recover it. 
00:30:30.241 [2024-12-05 14:02:01.445566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.445592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.241 [2024-12-05 14:02:01.445734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.445766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.241 [2024-12-05 14:02:01.445863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.445888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.241 [2024-12-05 14:02:01.445994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.446042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.241 [2024-12-05 14:02:01.446129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.446153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 
00:30:30.241 [2024-12-05 14:02:01.446242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.446266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.241 [2024-12-05 14:02:01.446350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.446375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.241 [2024-12-05 14:02:01.446466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.446492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.241 [2024-12-05 14:02:01.446576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.241 [2024-12-05 14:02:01.446601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.241 qpair failed and we were unable to recover it. 00:30:30.242 [2024-12-05 14:02:01.446698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.242 [2024-12-05 14:02:01.446726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.242 qpair failed and we were unable to recover it. 
00:30:30.242 [2024-12-05 14:02:01.446846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.242 [2024-12-05 14:02:01.446872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.242 qpair failed and we were unable to recover it. 00:30:30.242 [2024-12-05 14:02:01.446955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.242 [2024-12-05 14:02:01.446981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.242 qpair failed and we were unable to recover it. 00:30:30.242 [2024-12-05 14:02:01.447073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.242 [2024-12-05 14:02:01.447099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.242 qpair failed and we were unable to recover it. 00:30:30.242 [2024-12-05 14:02:01.447183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.242 [2024-12-05 14:02:01.447211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.242 qpair failed and we were unable to recover it. 00:30:30.242 [2024-12-05 14:02:01.447323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.242 [2024-12-05 14:02:01.447349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.242 qpair failed and we were unable to recover it. 
00:30:30.242 [2024-12-05 14:02:01.447440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.242 [2024-12-05 14:02:01.447467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.242 qpair failed and we were unable to recover it. 00:30:30.243 [2024-12-05 14:02:01.447544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.243 [2024-12-05 14:02:01.447568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.243 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.447650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.447676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.447789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.447814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.447903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.447930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 
00:30:30.244 [2024-12-05 14:02:01.448063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.448089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.448176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.448202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.448317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.448348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.448438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.448466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.448552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.448579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 
00:30:30.244 [2024-12-05 14:02:01.448677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.448703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.244 [2024-12-05 14:02:01.448850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.244 [2024-12-05 14:02:01.448876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.244 qpair failed and we were unable to recover it. 00:30:30.245 [2024-12-05 14:02:01.448965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.245 [2024-12-05 14:02:01.448992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.245 qpair failed and we were unable to recover it. 00:30:30.245 [2024-12-05 14:02:01.449105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.245 [2024-12-05 14:02:01.449132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.245 qpair failed and we were unable to recover it. 00:30:30.245 [2024-12-05 14:02:01.449237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.245 [2024-12-05 14:02:01.449261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.245 qpair failed and we were unable to recover it. 
00:30:30.245 [2024-12-05 14:02:01.449345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.245 [2024-12-05 14:02:01.449373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.245 qpair failed and we were unable to recover it. 00:30:30.245 [2024-12-05 14:02:01.449474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.245 [2024-12-05 14:02:01.449501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.245 qpair failed and we were unable to recover it. 00:30:30.245 [2024-12-05 14:02:01.449589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.245 [2024-12-05 14:02:01.449615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.245 qpair failed and we were unable to recover it. 00:30:30.245 [2024-12-05 14:02:01.449697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.246 [2024-12-05 14:02:01.449722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.246 qpair failed and we were unable to recover it. 00:30:30.246 [2024-12-05 14:02:01.449806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.246 [2024-12-05 14:02:01.449832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.246 qpair failed and we were unable to recover it. 
00:30:30.246 [2024-12-05 14:02:01.449905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.246 [2024-12-05 14:02:01.449930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.246 qpair failed and we were unable to recover it. 00:30:30.246 [2024-12-05 14:02:01.450048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.246 [2024-12-05 14:02:01.450074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.246 qpair failed and we were unable to recover it. 00:30:30.246 [2024-12-05 14:02:01.450194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.246 [2024-12-05 14:02:01.450220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.246 qpair failed and we were unable to recover it. 00:30:30.246 [2024-12-05 14:02:01.450345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.246 [2024-12-05 14:02:01.450370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.246 qpair failed and we were unable to recover it. 00:30:30.246 [2024-12-05 14:02:01.450467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.246 [2024-12-05 14:02:01.450494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 
00:30:30.247 [2024-12-05 14:02:01.450584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.450609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 00:30:30.247 [2024-12-05 14:02:01.450697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.450722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 00:30:30.247 [2024-12-05 14:02:01.450801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.450826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 00:30:30.247 [2024-12-05 14:02:01.450926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.450961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 00:30:30.247 [2024-12-05 14:02:01.451067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.451094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 
00:30:30.247 [2024-12-05 14:02:01.451184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.451210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 00:30:30.247 [2024-12-05 14:02:01.451288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.451314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 00:30:30.247 [2024-12-05 14:02:01.451391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.451425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.247 qpair failed and we were unable to recover it. 00:30:30.247 [2024-12-05 14:02:01.451522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.247 [2024-12-05 14:02:01.451548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.248 qpair failed and we were unable to recover it. 00:30:30.248 [2024-12-05 14:02:01.451637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.248 [2024-12-05 14:02:01.451664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.248 qpair failed and we were unable to recover it. 
00:30:30.248 [2024-12-05 14:02:01.451754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.248 [2024-12-05 14:02:01.451793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.248 qpair failed and we were unable to recover it. 00:30:30.248 [2024-12-05 14:02:01.451893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.248 [2024-12-05 14:02:01.451920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.248 qpair failed and we were unable to recover it. 00:30:30.248 [2024-12-05 14:02:01.452006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.248 [2024-12-05 14:02:01.452033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.248 qpair failed and we were unable to recover it. 00:30:30.248 [2024-12-05 14:02:01.452128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.248 [2024-12-05 14:02:01.452154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.248 qpair failed and we were unable to recover it. 00:30:30.248 [2024-12-05 14:02:01.452235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.248 [2024-12-05 14:02:01.452262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 
00:30:30.249 [2024-12-05 14:02:01.452353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.249 [2024-12-05 14:02:01.452381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-12-05 14:02:01.452477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.249 [2024-12-05 14:02:01.452503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-12-05 14:02:01.452621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.249 [2024-12-05 14:02:01.452645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-12-05 14:02:01.452737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.249 [2024-12-05 14:02:01.452762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-12-05 14:02:01.452852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.249 [2024-12-05 14:02:01.452876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 
00:30:30.249 [2024-12-05 14:02:01.452962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.249 [2024-12-05 14:02:01.452985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-12-05 14:02:01.453093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.249 [2024-12-05 14:02:01.453116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.250 [2024-12-05 14:02:01.453205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.250 [2024-12-05 14:02:01.453236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.250 qpair failed and we were unable to recover it. 00:30:30.250 [2024-12-05 14:02:01.453387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.250 [2024-12-05 14:02:01.453412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.250 qpair failed and we were unable to recover it. 00:30:30.250 [2024-12-05 14:02:01.453530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.250 [2024-12-05 14:02:01.453555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.250 qpair failed and we were unable to recover it. 
00:30:30.250 [2024-12-05 14:02:01.453635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.250 [2024-12-05 14:02:01.453660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.250 qpair failed and we were unable to recover it. 00:30:30.250 [2024-12-05 14:02:01.453777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.250 [2024-12-05 14:02:01.453802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.250 qpair failed and we were unable to recover it. 00:30:30.250 [2024-12-05 14:02:01.453924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.250 [2024-12-05 14:02:01.453949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.250 qpair failed and we were unable to recover it. 00:30:30.250 [2024-12-05 14:02:01.454036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.250 [2024-12-05 14:02:01.454084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.250 qpair failed and we were unable to recover it. 00:30:30.251 [2024-12-05 14:02:01.454193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.251 [2024-12-05 14:02:01.454217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.251 qpair failed and we were unable to recover it. 
00:30:30.251 [2024-12-05 14:02:01.454335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.251 [2024-12-05 14:02:01.454359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.251 qpair failed and we were unable to recover it. 00:30:30.251 [2024-12-05 14:02:01.454470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.251 [2024-12-05 14:02:01.454494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.251 qpair failed and we were unable to recover it. 00:30:30.251 [2024-12-05 14:02:01.454576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.251 [2024-12-05 14:02:01.454600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.251 qpair failed and we were unable to recover it. 00:30:30.251 [2024-12-05 14:02:01.454688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.252 [2024-12-05 14:02:01.454712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.252 qpair failed and we were unable to recover it. 00:30:30.252 [2024-12-05 14:02:01.454788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.252 [2024-12-05 14:02:01.454812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.252 qpair failed and we were unable to recover it. 
00:30:30.252 [2024-12-05 14:02:01.454926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.454950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.455037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.455148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.455256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.455362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 
00:30:30.253 [2024-12-05 14:02:01.455513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.455665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.455820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.455940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.455966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 00:30:30.253 [2024-12-05 14:02:01.456046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.253 [2024-12-05 14:02:01.456072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.253 qpair failed and we were unable to recover it. 
00:30:30.254 [2024-12-05 14:02:01.456164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.254 [2024-12-05 14:02:01.456189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.254 qpair failed and we were unable to recover it. 00:30:30.254 [2024-12-05 14:02:01.456269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.254 [2024-12-05 14:02:01.456294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.254 qpair failed and we were unable to recover it. 00:30:30.254 [2024-12-05 14:02:01.456386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.254 [2024-12-05 14:02:01.456413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.254 qpair failed and we were unable to recover it. 00:30:30.254 [2024-12-05 14:02:01.456558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.254 [2024-12-05 14:02:01.456588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.254 qpair failed and we were unable to recover it. 00:30:30.254 [2024-12-05 14:02:01.456674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.254 [2024-12-05 14:02:01.456707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.254 qpair failed and we were unable to recover it. 
00:30:30.254 [2024-12-05 14:02:01.456798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.254 [2024-12-05 14:02:01.456846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.254 qpair failed and we were unable to recover it. 00:30:30.254 [2024-12-05 14:02:01.456955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.254 [2024-12-05 14:02:01.456988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.254 qpair failed and we were unable to recover it. 00:30:30.254 [2024-12-05 14:02:01.457108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.457170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.457283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.457310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.457404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.457437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 
00:30:30.255 [2024-12-05 14:02:01.457521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.457547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.457646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.457670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.457747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.457772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.457852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.457877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.457989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.458013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 
00:30:30.255 [2024-12-05 14:02:01.458120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.458144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.458222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.458248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.458340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.255 [2024-12-05 14:02:01.458366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.255 qpair failed and we were unable to recover it. 00:30:30.255 [2024-12-05 14:02:01.458457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.458486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 00:30:30.256 [2024-12-05 14:02:01.458580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.458607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 
00:30:30.256 [2024-12-05 14:02:01.458696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.458722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 00:30:30.256 [2024-12-05 14:02:01.458801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.458827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 00:30:30.256 [2024-12-05 14:02:01.458943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.458969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 00:30:30.256 [2024-12-05 14:02:01.459049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.459077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 00:30:30.256 [2024-12-05 14:02:01.459157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.459183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 
00:30:30.256 [2024-12-05 14:02:01.459305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.256 [2024-12-05 14:02:01.459333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.256 qpair failed and we were unable to recover it. 00:30:30.257 [2024-12-05 14:02:01.459457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.257 [2024-12-05 14:02:01.459485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.257 qpair failed and we were unable to recover it. 00:30:30.257 [2024-12-05 14:02:01.459577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.257 [2024-12-05 14:02:01.459603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.257 qpair failed and we were unable to recover it. 00:30:30.257 [2024-12-05 14:02:01.459712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.257 [2024-12-05 14:02:01.459739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.257 qpair failed and we were unable to recover it. 00:30:30.257 [2024-12-05 14:02:01.459825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.257 [2024-12-05 14:02:01.459852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.257 qpair failed and we were unable to recover it. 
00:30:30.257 [2024-12-05 14:02:01.459967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.257 [2024-12-05 14:02:01.459993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.257 qpair failed and we were unable to recover it. 00:30:30.257 [2024-12-05 14:02:01.460080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.257 [2024-12-05 14:02:01.460112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.257 qpair failed and we were unable to recover it. 00:30:30.258 [2024-12-05 14:02:01.460225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.258 [2024-12-05 14:02:01.460264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.258 qpair failed and we were unable to recover it. 00:30:30.258 [2024-12-05 14:02:01.460385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.258 [2024-12-05 14:02:01.460412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.258 qpair failed and we were unable to recover it. 00:30:30.258 [2024-12-05 14:02:01.460537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.258 [2024-12-05 14:02:01.460563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.258 qpair failed and we were unable to recover it. 
00:30:30.258 [2024-12-05 14:02:01.460643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.258 [2024-12-05 14:02:01.460669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.258 qpair failed and we were unable to recover it. 00:30:30.258 [2024-12-05 14:02:01.460753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.258 [2024-12-05 14:02:01.460779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.258 qpair failed and we were unable to recover it. 00:30:30.258 [2024-12-05 14:02:01.460883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.258 [2024-12-05 14:02:01.460909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 00:30:30.259 [2024-12-05 14:02:01.460998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.259 [2024-12-05 14:02:01.461025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 00:30:30.259 [2024-12-05 14:02:01.461118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.259 [2024-12-05 14:02:01.461145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 
00:30:30.259 [2024-12-05 14:02:01.461235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.259 [2024-12-05 14:02:01.461262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 00:30:30.259 [2024-12-05 14:02:01.461398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.259 [2024-12-05 14:02:01.461433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 00:30:30.259 [2024-12-05 14:02:01.461530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.259 [2024-12-05 14:02:01.461556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 00:30:30.259 [2024-12-05 14:02:01.461649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.259 [2024-12-05 14:02:01.461675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 00:30:30.259 [2024-12-05 14:02:01.461785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.259 [2024-12-05 14:02:01.461811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.259 qpair failed and we were unable to recover it. 
00:30:30.260 [2024-12-05 14:02:01.461909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.461935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.462046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.462107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.462227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.462255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.462370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.462396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.462513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.462540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 
00:30:30.260 [2024-12-05 14:02:01.462631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.462660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.462749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.462778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.462918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.462964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.463046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.463073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.463189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.463215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 
00:30:30.260 [2024-12-05 14:02:01.463322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.463361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.463481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.260 [2024-12-05 14:02:01.463510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.260 qpair failed and we were unable to recover it. 00:30:30.260 [2024-12-05 14:02:01.463597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.463624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.463739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.463765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.463878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.463910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.464050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.464081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.464217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.464243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.464341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.464371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.464467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.464495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.464571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.464597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.464728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.464773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.464855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.464880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.464978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.465011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.465207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.465233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.465325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.465363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.465469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.465497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.465576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.465609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.465734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.465760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.465877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.465911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.466016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.466047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.466193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.466219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.466300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.466326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.466406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.466442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.466565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.466591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.466681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.466709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.466835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.466881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.467023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.467167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.467331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.467453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.467591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.467707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.467815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.467928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.467953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.468057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.468082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.468177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.468216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.468310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.468338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.468427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.468455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.468572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.468598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 00:30:30.261 [2024-12-05 14:02:01.468692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.261 [2024-12-05 14:02:01.468718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.261 qpair failed and we were unable to recover it. 
00:30:30.261 [2024-12-05 14:02:01.468801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.261 [2024-12-05 14:02:01.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.261 qpair failed and we were unable to recover it.
00:30:30.261 [2024-12-05 14:02:01.468939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.261 [2024-12-05 14:02:01.468966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.261 qpair failed and we were unable to recover it.
00:30:30.261 [2024-12-05 14:02:01.469051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.261 [2024-12-05 14:02:01.469080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.261 qpair failed and we were unable to recover it.
00:30:30.261 [2024-12-05 14:02:01.469200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.261 [2024-12-05 14:02:01.469233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.261 qpair failed and we were unable to recover it.
00:30:30.261 [2024-12-05 14:02:01.469318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.261 [2024-12-05 14:02:01.469345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.261 qpair failed and we were unable to recover it.
00:30:30.261 [2024-12-05 14:02:01.469431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.261 [2024-12-05 14:02:01.469456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.469534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.469559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.469640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.469665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.469776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.469801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.469886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.469917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.470007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.470033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.470149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.470177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.470293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.470318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.470440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.470469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.470562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.470589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.470685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.470712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.470842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.470886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.471887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.471981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.472969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.472996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.473145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.473283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.473442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.473557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.473685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.473805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.473910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.473993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.474020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.474134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.474160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.474263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.474309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.474423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.474463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.474583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.474612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.474732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.474759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.474875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.474900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.475008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.475034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.475126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.475153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.475248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.475287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.475392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.475441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.475529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.262 [2024-12-05 14:02:01.475556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.262 qpair failed and we were unable to recover it.
00:30:30.262 [2024-12-05 14:02:01.475635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.475661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.475800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.475846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.475947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.475979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.476156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.476273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.476408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.476538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.476655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.476763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.476879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.476995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.477940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.477979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.478918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.478943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.479912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.479938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.480050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.480163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.480311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.480457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.480566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.480674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.263 [2024-12-05 14:02:01.480775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.263 qpair failed and we were unable to recover it.
00:30:30.263 [2024-12-05 14:02:01.480891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.263 [2024-12-05 14:02:01.480917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.263 qpair failed and we were unable to recover it. 00:30:30.263 [2024-12-05 14:02:01.481025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.263 [2024-12-05 14:02:01.481051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.263 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.481143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.481169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.481271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.481310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.481404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.481437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 
00:30:30.265 [2024-12-05 14:02:01.481532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.481558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.481670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.481695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.481776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.481802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.481888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.481914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.482056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.482082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 
00:30:30.265 [2024-12-05 14:02:01.482204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.482233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.482360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.482399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.482500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.482528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.482619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.482645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.482724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.482750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 
00:30:30.265 [2024-12-05 14:02:01.482876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.482925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.483043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.265 [2024-12-05 14:02:01.483068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.265 qpair failed and we were unable to recover it. 00:30:30.265 [2024-12-05 14:02:01.483177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.483202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.483339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.483365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.483450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.483477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.483560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.483586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.483669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.483694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.483782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.483808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.483966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.484129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.484265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.484375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.484510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.484657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.484815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.484946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.484972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.485084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.485110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.485209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.485248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.485345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.485372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.485494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.485522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.485609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.485635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.485732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.485766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.485859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.485884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.486038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.486072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.486236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.486268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.486382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.486409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.486616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.486642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.486770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.486818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.486937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.486985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.487101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.487243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.487356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.487480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.487600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.487710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.487856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.487958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.487984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.488068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.488095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.488198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.488238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.488335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.488363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.488453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.488480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.488585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.488611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.488724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.488788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.488896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.488930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.489050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.489083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 
00:30:30.266 [2024-12-05 14:02:01.489218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.489244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.489357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.489384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.266 qpair failed and we were unable to recover it. 00:30:30.266 [2024-12-05 14:02:01.489484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.266 [2024-12-05 14:02:01.489513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.489624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.489650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.489732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.489759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 
00:30:30.267 [2024-12-05 14:02:01.489855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.489881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.489996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.490097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.490209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.490335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 
00:30:30.267 [2024-12-05 14:02:01.490478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.490615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.490737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.490884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.490909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.490990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 
00:30:30.267 [2024-12-05 14:02:01.491114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.491230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.491349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.491504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.491617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 
00:30:30.267 [2024-12-05 14:02:01.491722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.491862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.491888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.491983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.492013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.492096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.492122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 00:30:30.267 [2024-12-05 14:02:01.492211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.492236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it. 
00:30:30.267 [2024-12-05 14:02:01.492324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.267 [2024-12-05 14:02:01.492350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.267 qpair failed and we were unable to recover it.
[identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it" triples repeat continuously from 14:02:01.492 through 14:02:01.507, cycling across tqpair=0x1315fa0, 0x7f9d30000b90, 0x7f9d28000b90, and 0x7f9d24000b90, all with addr=10.0.0.2, port=4420]
00:30:30.270 [2024-12-05 14:02:01.507348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.507375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.507495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.507523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.507642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.507668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.507760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.507786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.507893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.507919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.508021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.508136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.508255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.508393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.508540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.508649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.508765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.508882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.508909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.508993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.509131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.509264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.509435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.509548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.509662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.509804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.509960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.509991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.510113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.510144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.510264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.510292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.510413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.510447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.510548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.510581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.510682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.510729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.510830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.510856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.510935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.510962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.511052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.511165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.511289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.511401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.511533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.511651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.511827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.511953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.511985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.512102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.512151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.512318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.512345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.512433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.512461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.512560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.512586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 
00:30:30.270 [2024-12-05 14:02:01.512691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.512739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.512829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.512855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.513011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.513043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.513185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.270 [2024-12-05 14:02:01.513211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.270 qpair failed and we were unable to recover it. 00:30:30.270 [2024-12-05 14:02:01.513322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.513349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 
00:30:30.271 [2024-12-05 14:02:01.513468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.513496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.513585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.513612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.513720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.513750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.513851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.513877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.513962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.513988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 
00:30:30.271 [2024-12-05 14:02:01.514068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.514180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.514296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.514411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.514529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 
00:30:30.271 [2024-12-05 14:02:01.514646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.514791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.514900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.514926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.515020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.515142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 
00:30:30.271 [2024-12-05 14:02:01.515250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.515359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.515475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.515589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.515748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 
00:30:30.271 [2024-12-05 14:02:01.515927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.515973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.516082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.516128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.516241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.516267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.516358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.516384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.516488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.516541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 
00:30:30.271 [2024-12-05 14:02:01.516657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.516684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.516802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.271 [2024-12-05 14:02:01.516828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.271 qpair failed and we were unable to recover it. 00:30:30.271 [2024-12-05 14:02:01.516917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.272 [2024-12-05 14:02:01.516944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.272 qpair failed and we were unable to recover it. 00:30:30.272 [2024-12-05 14:02:01.517033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.272 [2024-12-05 14:02:01.517060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.272 qpair failed and we were unable to recover it. 00:30:30.272 [2024-12-05 14:02:01.517145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.272 [2024-12-05 14:02:01.517172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.272 qpair failed and we were unable to recover it. 
00:30:30.272 [2024-12-05 14:02:01.517313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.517340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.517430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.517468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.517561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.517587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.517676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.517703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.517795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.517821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.517932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.517958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.518950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.518978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.519906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.519935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.520023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.520049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.520191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.520218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.520332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.520358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.520460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.520487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.520599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.520627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.520737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.520769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.520882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.520908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.521041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.521068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.521197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.521224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.521304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.521331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.521451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.521478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.521619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.521646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.521732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.521781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.521922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.521966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.522974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.522999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.523114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.523140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.523237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.523265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.523389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.272 [2024-12-05 14:02:01.523435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.272 qpair failed and we were unable to recover it.
00:30:30.272 [2024-12-05 14:02:01.523529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.523557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.523682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.523709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.523792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.523818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.523906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.523933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.524958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.524985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.525929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.525955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.526038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.526063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.526144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.526171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.526273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.526302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.526427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.526456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.526555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.526594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.526717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.526750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.526865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.526898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.527003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.527042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.527258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.527290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.527392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.527432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.527567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.527592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.527723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.527753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.527872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.527902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.528013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.528045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.528181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.528216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.528348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.528374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.528476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.528505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.528590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.528616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.528753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.528798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.528908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.528955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.529065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.529099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.529221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.529246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.529343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.529367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.529458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.529484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.529579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.529605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.529687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.529733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.529874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.529919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.530030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.530060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.530244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.530289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.530383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.530408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.530527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.530552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.530634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.530662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.530801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.530848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.530965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.531090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.531232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.531345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.531464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.531593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.531762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.531893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.531921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.532005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.532030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.532144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.273 [2024-12-05 14:02:01.532169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.273 qpair failed and we were unable to recover it.
00:30:30.273 [2024-12-05 14:02:01.532252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.274 [2024-12-05 14:02:01.532277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.274 qpair failed and we were unable to recover it.
00:30:30.274 [2024-12-05 14:02:01.532361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.274 [2024-12-05 14:02:01.532387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.274 qpair failed and we were unable to recover it.
00:30:30.274 [2024-12-05 14:02:01.532483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.274 [2024-12-05 14:02:01.532514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.274 qpair failed and we were unable to recover it.
00:30:30.274 [2024-12-05 14:02:01.532624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.274 [2024-12-05 14:02:01.532675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.274 qpair failed and we were unable to recover it.
00:30:30.274 [2024-12-05 14:02:01.532853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.274 [2024-12-05 14:02:01.532900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.274 qpair failed and we were unable to recover it.
00:30:30.274 [2024-12-05 14:02:01.533033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.533170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.533283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.533406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.533528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.533665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.533772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.533882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.533907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.534018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.534045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.534171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.534210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.534304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.534332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.534428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.534454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.534543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.534569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.534648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.534679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.534808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.534839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.535034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.535173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.535278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.535390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.535506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.535618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.535723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.535866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.535891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.536004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.536123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.536241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.536350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.536504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.536618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.536734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.536900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.536932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.537037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.537068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.537178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.537224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.537358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.537386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.537484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.537511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.537588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.537614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.537730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.537757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.537870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.537903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.538032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.538175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.538311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.538453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.538577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.538699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.538834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.538943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.538970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.539104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.539227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.539367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.539484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.539604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.539718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.539821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.539918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.539943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.540062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.540172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.540301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.540428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.540546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.540713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 
00:30:30.274 [2024-12-05 14:02:01.540858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.540970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.540995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.274 qpair failed and we were unable to recover it. 00:30:30.274 [2024-12-05 14:02:01.541090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.274 [2024-12-05 14:02:01.541117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.541221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.541261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.541361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.541388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 
00:30:30.275 [2024-12-05 14:02:01.541493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.541519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.541607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.541633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.541780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.541827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.541917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.541944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.542028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.542056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 
00:30:30.275 [2024-12-05 14:02:01.542144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.542172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.542295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.542324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.542408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.542446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.542563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.542593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 00:30:30.275 [2024-12-05 14:02:01.542684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.275 [2024-12-05 14:02:01.542709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.275 qpair failed and we were unable to recover it. 
00:30:30.275 [2024-12-05 14:02:01.542834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.542866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.542973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.543004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.543152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.543187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.543315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.543343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.543461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.543492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.543609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.543642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.543792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.543825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.543958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.543991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.544102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.544136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.544314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.544341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.544491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.544518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.544634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.544659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.544759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.544785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.544894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.544919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.545950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.545976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.546070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.546095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.546183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.546209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.546339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.546378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.546523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.546562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.546704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.546730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.546842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.546868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.546984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.547143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.547282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.547390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.547542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.547663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.547796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.547915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.547940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.548017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.548043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.548162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.548187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.548265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.548292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.548411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.548444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.548533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.548559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.548683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.548722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.548858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.548897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.549920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.549947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.550042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.550068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.550192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.550222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.550341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.550368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.550471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.550501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.550597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.550624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.550778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.550813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.550933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.550983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.551100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.551127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.275 [2024-12-05 14:02:01.551219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.275 [2024-12-05 14:02:01.551248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.275 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.551331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.551359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.551441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.551468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.551554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.551581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.551671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.551717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.551883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.551915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.552070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.552103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.552258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.552284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.552360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.552386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.552476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.552502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.552607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.552632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.552752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.552780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.552896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.552922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.553923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.553949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.554071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.554195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.554372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.554500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.554606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.554723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.554866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.554980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.555148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.555287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.555400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.555510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.555616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.555755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.555863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.555890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.556963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.556994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.557091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.557123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.557261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.557289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.557378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.557404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.557510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.557537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.557646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.557673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.557809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.557854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.558022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.558067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.558159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.558184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.558294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.558319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.558439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.558470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.558570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.558596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.558675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.276 [2024-12-05 14:02:01.558702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.276 qpair failed and we were unable to recover it.
00:30:30.276 [2024-12-05 14:02:01.558788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.558814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.558911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.558937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.559037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.559182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.559318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 
00:30:30.276 [2024-12-05 14:02:01.559441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.559559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.559674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.559812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.559938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.559963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 
00:30:30.276 [2024-12-05 14:02:01.560074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.560100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.560198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.276 [2024-12-05 14:02:01.560226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.276 qpair failed and we were unable to recover it. 00:30:30.276 [2024-12-05 14:02:01.560326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.560352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.560484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.560524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.560628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.560656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.560835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.560869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.561063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.561096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.561198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.561230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.561363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.561403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.561512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.561540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.561645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.561671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.561758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.561784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.561875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.561901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.562016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.562052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.562173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.562214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.562377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.562404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.562515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.562541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.562630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.562657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.562773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.562799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.562903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.562936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.563077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.563109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.563289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.563349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.563438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.563465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.563581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.563607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.563747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.563795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.563931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.563976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.564074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.564103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.564195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.564221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.564346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.564372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.564476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.564503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.564590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.564616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.564717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.564750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.564889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.564922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.565032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.565165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.565342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.565506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.565613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.565716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.565820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.565929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.565955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.566047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.566158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.566273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.566376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.566497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.566641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.566788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.566914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.566955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.567104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.567165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.567282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.567310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.567397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.567427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.567515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.567540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.567675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.567721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.567834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.567875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.568019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.568167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.568315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.568435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.568554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.568680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.568844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.568961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.568988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.569085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.569111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 00:30:30.277 [2024-12-05 14:02:01.569240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.277 [2024-12-05 14:02:01.569280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.277 qpair failed and we were unable to recover it. 
00:30:30.277 [2024-12-05 14:02:01.569393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.277 [2024-12-05 14:02:01.569443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.277 qpair failed and we were unable to recover it.
00:30:30.279 [... the same three-line error record repeats continuously from 14:02:01.569541 through 14:02:01.585082, cycling over tqpairs 0x1315fa0, 0x7f9d30000b90, 0x7f9d28000b90, and 0x7f9d24000b90; every attempt fails to connect to addr=10.0.0.2, port=4420 with errno = 111, and no qpair recovers ...]
00:30:30.279 [2024-12-05 14:02:01.585182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.585211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.585322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.585349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.585438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.585464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.585551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.585578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.585657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.585683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 
00:30:30.279 [2024-12-05 14:02:01.585797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.585824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.585916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.585943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.586052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.586161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.586277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 
00:30:30.279 [2024-12-05 14:02:01.586385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.586533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.586683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.586794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.586921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.586948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 
00:30:30.279 [2024-12-05 14:02:01.587039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.587157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.587275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.587421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.587531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 
00:30:30.279 [2024-12-05 14:02:01.587642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.587781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.587959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.587992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.588129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.588166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.588279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.588309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 
00:30:30.279 [2024-12-05 14:02:01.588405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.588438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.588531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.588558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.588671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.588718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.588801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.588826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.588929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.588976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 
00:30:30.279 [2024-12-05 14:02:01.589086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.589137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.589225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.589254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.589364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.589390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.279 [2024-12-05 14:02:01.589482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.279 [2024-12-05 14:02:01.589509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.279 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.589597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.589621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.589701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.589732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.589823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.589849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.589954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.590115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.590244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.590384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.590511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.590625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.590742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.590849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.590955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.590981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.591092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.591118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.591247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.591286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.591410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.591445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.591553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.591580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.591677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.591704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.591796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.591822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.591919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.591945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.592038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.592158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.592303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.592436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.592565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.592698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.592838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.592941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.592966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.593087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.593113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.593231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.593259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.593381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.593406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.593536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.593563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.593642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.593669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.593756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.593806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.593946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.593980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.594115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.594149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.594278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.594317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.594406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.594452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.594557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.594584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.594713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.594761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.594856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.594889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.595007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.595041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.595193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.595222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.595338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.595363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.595450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.595476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.595594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.595619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.595730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.595780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.595873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.595899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.595982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.596008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.596099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.596131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.596226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.596254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 00:30:30.280 [2024-12-05 14:02:01.596366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.280 [2024-12-05 14:02:01.596406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.280 qpair failed and we were unable to recover it. 
00:30:30.280 [2024-12-05 14:02:01.596510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.596539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.596641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.596668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.596781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.596807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.596898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.596924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.597034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.597068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.597195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.597230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.597368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.597395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.597501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.597528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.597609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.597635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.597739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.597773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.597895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.597939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.598053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.598088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.598244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.598270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.598374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.598400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.598507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.598535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.598651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.598677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.598814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.598848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.598956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.598995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.599125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.599153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.599234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.599260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.280 [2024-12-05 14:02:01.599373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.280 [2024-12-05 14:02:01.599399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.280 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.599496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.599522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.599633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.599659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.599774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.599800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.599917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.599942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.600060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.600202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.600309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.600454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.600570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.600706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.600878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.600993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.601100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.601208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.601325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.601463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.601628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.601775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.601907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.601932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.602928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.602954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.603042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.603069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.603171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.603211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.603311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.603350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.603449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.603477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.603598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.603625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.603747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.603784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.603896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.603932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.604042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.604078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.604219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.604258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.604370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.604402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.604507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.604535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.604661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.604686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.604791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.604818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.604903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.604929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.605969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.605994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.606099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.606124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.606226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.606252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.606370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.606397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.606497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.606526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.606613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.606640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.606754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.606805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.606944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.606990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.607099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.607147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.607252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.607279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.607382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.607434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.607564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.607593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.607672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.607699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.607810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.607844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.607961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.607996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.608113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.608162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.608265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.608291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.608439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.608475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.281 qpair failed and we were unable to recover it.
00:30:30.281 [2024-12-05 14:02:01.608588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.281 [2024-12-05 14:02:01.608614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.608732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.282 [2024-12-05 14:02:01.608766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.608907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.282 [2024-12-05 14:02:01.608942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.609070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.282 [2024-12-05 14:02:01.609108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.609230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.282 [2024-12-05 14:02:01.609283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.609425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.282 [2024-12-05 14:02:01.609452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.609588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.282 [2024-12-05 14:02:01.609614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.609696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.282 [2024-12-05 14:02:01.609722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.282 qpair failed and we were unable to recover it.
00:30:30.282 [2024-12-05 14:02:01.609854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.609901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.609988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.610016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.610106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.610136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.610264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.610303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.610427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.610456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.610539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.610566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.610660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.610686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.610820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.610854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.611002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.611164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.611316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.611438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.611571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.611695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.611819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.611933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.611960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.612107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.612152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.612265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.612292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.612383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.612409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.612504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.612531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.612621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.612648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.612740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.612766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.612846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.612873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.612982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.613107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.613223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.613339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.613456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.613568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.613751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.613919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.613966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.614100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.614135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.614230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.614256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.614345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.614372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.614475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.614504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.614619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.614646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.614767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.614794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.614907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.614933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.615020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.615048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 00:30:30.282 [2024-12-05 14:02:01.615128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.615154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.282 qpair failed and we were unable to recover it. 
00:30:30.282 [2024-12-05 14:02:01.615247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.282 [2024-12-05 14:02:01.615274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.565 qpair failed and we were unable to recover it. 00:30:30.565 [2024-12-05 14:02:01.615364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.565 [2024-12-05 14:02:01.615390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.615499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.615526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.615673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.615699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.615784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.615811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 
00:30:30.566 [2024-12-05 14:02:01.615927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.615961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.616079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.616114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.616254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.616282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.616406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.616448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.616541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.616579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 
00:30:30.566 [2024-12-05 14:02:01.616683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.616718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.616865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.616899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.617018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.617062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.617195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.617228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.617340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.617368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 
00:30:30.566 [2024-12-05 14:02:01.617577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.617604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.617712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.617745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.617905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.617948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.618085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.618111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.618298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.618332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 
00:30:30.566 [2024-12-05 14:02:01.618442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.618469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.618575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.618602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.618683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.618710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.618825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.618859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.618979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.619021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 
00:30:30.566 [2024-12-05 14:02:01.619168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.619205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.619361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.619387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.619496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.619523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.619613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.619639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.619775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.619806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 
00:30:30.566 [2024-12-05 14:02:01.619940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.619975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.620115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.620161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.620307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.620341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.620499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.620526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.620643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.620670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 
00:30:30.566 [2024-12-05 14:02:01.620811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.620857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.620992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.621040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.621173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.621209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.566 [2024-12-05 14:02:01.621328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.566 [2024-12-05 14:02:01.621354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.566 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.621441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.621467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 
00:30:30.567 [2024-12-05 14:02:01.621551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.621578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.621667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.621693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.621859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.621886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.621976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.622130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 
00:30:30.567 [2024-12-05 14:02:01.622293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.622414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.622548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.622663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.622781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 
00:30:30.567 [2024-12-05 14:02:01.622917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.622945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.623021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.623128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.623276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.623408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 
00:30:30.567 [2024-12-05 14:02:01.623553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.623673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.623794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.623903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.623930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.624036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 
00:30:30.567 [2024-12-05 14:02:01.624146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.624253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.624373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.624505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.624618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 
00:30:30.567 [2024-12-05 14:02:01.624720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.624860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.624886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.625005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.625031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.625115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.625141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.625232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.625265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 
00:30:30.567 [2024-12-05 14:02:01.625351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.625379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.625484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.625513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.625631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.567 [2024-12-05 14:02:01.625658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.567 qpair failed and we were unable to recover it. 00:30:30.567 [2024-12-05 14:02:01.625778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.625804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.625914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.625940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 
00:30:30.568 [2024-12-05 14:02:01.626071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.626125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.626209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.626234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.626354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.626383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.626476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.626503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.626588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.626614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 
00:30:30.568 [2024-12-05 14:02:01.626763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.626810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.626956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.626997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.627077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.627103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.627225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.627251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.627345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.627372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 
00:30:30.568 [2024-12-05 14:02:01.627478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.627506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.627595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.627621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.627736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.627776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.627899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.627927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.628017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.628045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 
00:30:30.568 [2024-12-05 14:02:01.628135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.628162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.628253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.628281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.628368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.628395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.628496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.628522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.628614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.628654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 
00:30:30.568 [2024-12-05 14:02:01.628816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.628843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.629006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.629047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.629255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.629282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.629372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.629399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.629499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.629526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 
00:30:30.568 [2024-12-05 14:02:01.629632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.629660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.629859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.629900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.630046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.630080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.630195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.630222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.630321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.630347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 
00:30:30.568 [2024-12-05 14:02:01.630436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.630476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.630569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.630596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.630750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.630789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.630898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.568 [2024-12-05 14:02:01.630935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.568 qpair failed and we were unable to recover it. 00:30:30.568 [2024-12-05 14:02:01.631073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.631109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.631232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.631279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.631378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.631423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.631540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.631568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.631674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.631709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.631792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.631818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.631892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.631918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.632546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.632907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.632999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.633133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.633258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.633372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.633502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.633611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.633748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.633868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.633897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.633981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.634098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.634223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.634341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.634458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.634582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.634719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.634829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.634856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.634985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.635011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.635109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.635137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.635255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.635283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.635368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.635395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.635494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.635521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 
00:30:30.569 [2024-12-05 14:02:01.635602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.635651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.569 [2024-12-05 14:02:01.635786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.569 [2024-12-05 14:02:01.635819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.569 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.635937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.635965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.636073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.636099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.636190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.636217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 
00:30:30.570 [2024-12-05 14:02:01.636336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.636363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.636451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.636477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.636558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.636584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.636685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.636718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.636874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.636907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 
00:30:30.570 [2024-12-05 14:02:01.637012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.637122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.637240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.637387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.637517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 
00:30:30.570 [2024-12-05 14:02:01.637632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.637810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.637919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.637946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.638066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.638180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 
00:30:30.570 [2024-12-05 14:02:01.638290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.638409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.638524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.638631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.638738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 
00:30:30.570 [2024-12-05 14:02:01.638902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.638928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.639065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.639099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.639212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.639238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.639324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.639349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.639439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.639466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 
00:30:30.570 [2024-12-05 14:02:01.639560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.639587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.639691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.570 [2024-12-05 14:02:01.639723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.570 qpair failed and we were unable to recover it. 00:30:30.570 [2024-12-05 14:02:01.639811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.639838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.639975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.640012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.640175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.640214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 
00:30:30.571 [2024-12-05 14:02:01.640367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.640394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.640497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.640525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.640656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.640682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.640770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.640796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.640914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.640942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 
00:30:30.571 [2024-12-05 14:02:01.641025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.641053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.641185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.641219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.641387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.641436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.641555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.641581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.641665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.641691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 
00:30:30.571 [2024-12-05 14:02:01.641811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.641837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.641928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.641955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.642039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.642151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.642288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 
00:30:30.571 [2024-12-05 14:02:01.642432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.642554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.642662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.642799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.642908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.642953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 
00:30:30.571 [2024-12-05 14:02:01.643066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.643092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.643198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.643224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.643310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.643336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.643428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.643456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.643548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.643586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 
00:30:30.571 [2024-12-05 14:02:01.643698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.643728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.643837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.643864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.643981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.644007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.644091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.644121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.644209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.644235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 
00:30:30.571 [2024-12-05 14:02:01.644324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.644349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.571 [2024-12-05 14:02:01.644466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.571 [2024-12-05 14:02:01.644492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.571 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.644569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.644601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.644694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.644721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.644813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.644839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 
00:30:30.572 [2024-12-05 14:02:01.644924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.644952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.645079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.645105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.645193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.645223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.645350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.645377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.645470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.645497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 
00:30:30.572 [2024-12-05 14:02:01.645613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.645646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.645744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.645771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.645884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.645911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.645996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.646111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 
00:30:30.572 [2024-12-05 14:02:01.646262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.646374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.646522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.646658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.646797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 
00:30:30.572 [2024-12-05 14:02:01.646915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.646943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.647036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.647063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.647168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.647208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.647335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.647364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.647453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.647481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 
00:30:30.572 [2024-12-05 14:02:01.647596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.647622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.647732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.647759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.647870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.647896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.648032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.648059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.648157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.648184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 
00:30:30.572 [2024-12-05 14:02:01.648291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.648317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.648404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.648440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.648564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.648593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.648698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.648743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.572 qpair failed and we were unable to recover it. 00:30:30.572 [2024-12-05 14:02:01.648841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.572 [2024-12-05 14:02:01.648869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.573 qpair failed and we were unable to recover it. 
00:30:30.573 [2024-12-05 14:02:01.649011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.649153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.649285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.649401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.649563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.649671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.649771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.649892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.649919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.650970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.650997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.651086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.651111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.651231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.651271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.651357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.651385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.651480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.651509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.651629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.651656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.651744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.651769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.651877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.651903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.652011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.652046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.652171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.652211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.652324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.652349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.652472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.652500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.652607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.652641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.652772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.652797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.652911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.652937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.653023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.653049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.653133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.653159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.653256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.653282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.653389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.653415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.653503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.573 [2024-12-05 14:02:01.653529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.573 qpair failed and we were unable to recover it.
00:30:30.573 [2024-12-05 14:02:01.653619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.653644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.653727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.653753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.653878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.653904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.653994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.654907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.654988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.655107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.655227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.655364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.655529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.655665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.655772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.655884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.655910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.656926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.656952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.657929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.657956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.658036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.658070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.658181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.658207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.658320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.574 [2024-12-05 14:02:01.658346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.574 qpair failed and we were unable to recover it.
00:30:30.574 [2024-12-05 14:02:01.658441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.658470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.658561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.658588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.658669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.658695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.658770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.658796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.658891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.658919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.659907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.659932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.660941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.575 [2024-12-05 14:02:01.660966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.575 qpair failed and we were unable to recover it.
00:30:30.575 [2024-12-05 14:02:01.661057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.575 [2024-12-05 14:02:01.661083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.575 qpair failed and we were unable to recover it. 00:30:30.575 [2024-12-05 14:02:01.661169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.575 [2024-12-05 14:02:01.661195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.575 qpair failed and we were unable to recover it. 00:30:30.575 [2024-12-05 14:02:01.661307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.575 [2024-12-05 14:02:01.661333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.575 qpair failed and we were unable to recover it. 00:30:30.575 [2024-12-05 14:02:01.661449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.575 [2024-12-05 14:02:01.661475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.575 qpair failed and we were unable to recover it. 00:30:30.575 [2024-12-05 14:02:01.661565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.575 [2024-12-05 14:02:01.661592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.575 qpair failed and we were unable to recover it. 
00:30:30.575 [2024-12-05 14:02:01.661725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.575 [2024-12-05 14:02:01.661751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.575 qpair failed and we were unable to recover it. 00:30:30.575 [2024-12-05 14:02:01.661864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.575 [2024-12-05 14:02:01.661889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.575 qpair failed and we were unable to recover it. 00:30:30.575 [2024-12-05 14:02:01.661976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.662094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.662206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 
00:30:30.576 [2024-12-05 14:02:01.662342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.662497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.662623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.662747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.662873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.662898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 
00:30:30.576 [2024-12-05 14:02:01.662990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.663094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.663259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.663448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.663557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 
00:30:30.576 [2024-12-05 14:02:01.663672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.663803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.663943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.663969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.664056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.664082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.664176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.664201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 
00:30:30.576 [2024-12-05 14:02:01.664351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.664391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.664533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.664572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.664663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.664692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.664785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.664812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.664905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.664931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 
00:30:30.576 [2024-12-05 14:02:01.665025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.665053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.665192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.665218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.665325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.665365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.665523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.665552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.665666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.665693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 
00:30:30.576 [2024-12-05 14:02:01.665804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.665832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.665952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.665977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.666065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.666101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.666184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.666211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.666295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.666321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 
00:30:30.576 [2024-12-05 14:02:01.666402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.666442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.666555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.666580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.666662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.576 [2024-12-05 14:02:01.666688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.576 qpair failed and we were unable to recover it. 00:30:30.576 [2024-12-05 14:02:01.666774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.666799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.666875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.666900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.577 [2024-12-05 14:02:01.666979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.667091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.667231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.667349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.667484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.577 [2024-12-05 14:02:01.667596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.667749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.667890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.667916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.668004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.668121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.577 [2024-12-05 14:02:01.668236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.668353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.668503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.668634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.668747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.577 [2024-12-05 14:02:01.668888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.668915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.669008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.669125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.669250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.669430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.577 [2024-12-05 14:02:01.669580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.669698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.669814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.669928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.669953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.670038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.577 [2024-12-05 14:02:01.670177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.670294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.670410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.670525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.670639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.577 [2024-12-05 14:02:01.670779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.670887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.670914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.671003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.671034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.671126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.671152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 00:30:30.577 [2024-12-05 14:02:01.671264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.577 [2024-12-05 14:02:01.671304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.577 qpair failed and we were unable to recover it. 
00:30:30.578 [2024-12-05 14:02:01.671430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.671464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.671556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.671583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.671671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.671697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.671788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.671814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.671921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.671947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 
00:30:30.578 [2024-12-05 14:02:01.672025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.672132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.672243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.672349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.672467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 
00:30:30.578 [2024-12-05 14:02:01.672576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.672696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.672828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.672933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.672961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 00:30:30.578 [2024-12-05 14:02:01.673075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.578 [2024-12-05 14:02:01.673104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.578 qpair failed and we were unable to recover it. 
00:30:30.578 [2024-12-05 14:02:01.673206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.673245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.673333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.673362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.673476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.673504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.673598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.673625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.673706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.673733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.673824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.673850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.673961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.673988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.674078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.674105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.674245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.674272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.674360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.674389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.674486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.674515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.674634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.674662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.674778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.674803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.674886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.674915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.675004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.675047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.675176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.675200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.675383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.675409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.675522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.675549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.675640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.675665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.675754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.675779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.578 qpair failed and we were unable to recover it.
00:30:30.578 [2024-12-05 14:02:01.675871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.578 [2024-12-05 14:02:01.675896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.675986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.676973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.676999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.677095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.677134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.677249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.677277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.677394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.677426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.677517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.677544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.677623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.677649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.677767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.677795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.677898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.677925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.678021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.678068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.678200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.678232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.678363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.678389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.678519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.678547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.678660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.678686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.678761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.678788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.678872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.678898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.679069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.679201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.679363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.679481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.679622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.679763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.679906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.679987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.680015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.680104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.680132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.680243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.680275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.680380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.680405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.680525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.680553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.680647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.680673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.680812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.579 [2024-12-05 14:02:01.680839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.579 qpair failed and we were unable to recover it.
00:30:30.579 [2024-12-05 14:02:01.680950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.680976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.681085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.681190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.681333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.681505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.681649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.681766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.681882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.681994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.682973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.682999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.683079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.683105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.683228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.683263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.683360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.683385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.683479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.683504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.683622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.683652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.683792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.683823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.683938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.683963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.684112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.684143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.684280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.684309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.684402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.684438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.684523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.684548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.684635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.684660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.684749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.684776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.684895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.684931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.685040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.685066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.685199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.685232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.580 qpair failed and we were unable to recover it.
00:30:30.580 [2024-12-05 14:02:01.685384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.580 [2024-12-05 14:02:01.685410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.581 qpair failed and we were unable to recover it.
00:30:30.581 [2024-12-05 14:02:01.685523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.581 [2024-12-05 14:02:01.685549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.581 qpair failed and we were unable to recover it.
00:30:30.581 [2024-12-05 14:02:01.685657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.581 [2024-12-05 14:02:01.685696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.581 qpair failed and we were unable to recover it.
00:30:30.581 [2024-12-05 14:02:01.685802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.581 [2024-12-05 14:02:01.685830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.581 qpair failed and we were unable to recover it.
00:30:30.581 [2024-12-05 14:02:01.685918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.685945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.686030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.686147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.686283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.686399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 
00:30:30.581 [2024-12-05 14:02:01.686516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.686634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.686743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.686855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.686884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.687002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 
00:30:30.581 [2024-12-05 14:02:01.687107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.687226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.687378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.687530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.687650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 
00:30:30.581 [2024-12-05 14:02:01.687763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.687869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.687896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.687984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.688094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.688230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 
00:30:30.581 [2024-12-05 14:02:01.688370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.688492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.688624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.688743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.688862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.688889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 
00:30:30.581 [2024-12-05 14:02:01.688976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.689094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.689239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.689350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.689471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 
00:30:30.581 [2024-12-05 14:02:01.689616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.689734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.689873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.689899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.689983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.690008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 00:30:30.581 [2024-12-05 14:02:01.690096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.581 [2024-12-05 14:02:01.690121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.581 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.690209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.690236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.690340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.690366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.690473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.690512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.690660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.690688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.690781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.690808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.690916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.690942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.691061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.691088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.691175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.691201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.691288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.691315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.691397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.691432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.691513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.691539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.691672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.691705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.691856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.691888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.692017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.692138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.692284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.692396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.692538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.692647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.692751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.692856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.692882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.692992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.693131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.693231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.693370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.693486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.693626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.693746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.693915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.693941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.694038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.694064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.694186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.694212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.694303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.694331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.694423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.694449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.694540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.694565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 00:30:30.582 [2024-12-05 14:02:01.694650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.694676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.582 qpair failed and we were unable to recover it. 
00:30:30.582 [2024-12-05 14:02:01.694764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.582 [2024-12-05 14:02:01.694790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.694868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.694894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.695005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.695121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.695239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 
00:30:30.583 [2024-12-05 14:02:01.695370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.695490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.695593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.695708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.695873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.695899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 
00:30:30.583 [2024-12-05 14:02:01.696020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.696177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.696281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.696415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.696532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 
00:30:30.583 [2024-12-05 14:02:01.696639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.696751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.696882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.696907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.697039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.697078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.697170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.697198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 
00:30:30.583 [2024-12-05 14:02:01.697289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.697318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.697408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.697444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.697561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.697594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.697689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.697714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.697865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.697910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 
00:30:30.583 [2024-12-05 14:02:01.698017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.698043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.698134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.698160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.698257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.698283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.698371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.698403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 00:30:30.583 [2024-12-05 14:02:01.698518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.698545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.583 qpair failed and we were unable to recover it. 
00:30:30.583 [2024-12-05 14:02:01.698629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.583 [2024-12-05 14:02:01.698656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.698741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.698767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.698913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.698940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.699030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.699147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 
00:30:30.584 [2024-12-05 14:02:01.699292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.699430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.699564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.699676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.699790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 
00:30:30.584 [2024-12-05 14:02:01.699940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.699967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.700080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.700106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.700193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.700220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.700324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.700353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.700458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.700485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 
00:30:30.584 [2024-12-05 14:02:01.700565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.700595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.700688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.700714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.700818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.700856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.700990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.701034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.701174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.701206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 
00:30:30.584 [2024-12-05 14:02:01.701319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.701361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.701509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.701542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.701637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.701663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.701788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.701814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.701960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.702007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 
00:30:30.584 [2024-12-05 14:02:01.702126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.702161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.702268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.702312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.702429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.702455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.702564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.702590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.702696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.702735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 
00:30:30.584 [2024-12-05 14:02:01.702847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.702874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.702998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.703031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.703163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.703196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.703307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.703351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.703480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.703519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 
00:30:30.584 [2024-12-05 14:02:01.703619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.703649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.703734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.584 [2024-12-05 14:02:01.703762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.584 qpair failed and we were unable to recover it. 00:30:30.584 [2024-12-05 14:02:01.703849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.703876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.704018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.704044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.704131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.704159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 
00:30:30.585 [2024-12-05 14:02:01.704250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.704276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.704376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.704414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.704536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.704575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.704690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.704725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.704956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 
00:30:30.585 [2024-12-05 14:02:01.705109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.705273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.705380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.705554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.705695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 
00:30:30.585 [2024-12-05 14:02:01.705805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.705907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.705933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.706045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.706151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.706265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 
00:30:30.585 [2024-12-05 14:02:01.706409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.706536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.706641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.706772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.706897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.706923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 
00:30:30.585 [2024-12-05 14:02:01.707016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.707136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.707270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.707383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.707512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 
00:30:30.585 [2024-12-05 14:02:01.707646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.707783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.707895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.707921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.585 qpair failed and we were unable to recover it. 00:30:30.585 [2024-12-05 14:02:01.708031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.585 [2024-12-05 14:02:01.708057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.708140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.708166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 
00:30:30.586 [2024-12-05 14:02:01.708252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.708277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.708364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.708390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.708494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.708521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.708604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.708629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.708712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.708738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 
00:30:30.586 [2024-12-05 14:02:01.708853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.708879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.709015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.709041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.709157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.709183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.709264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.709290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 00:30:30.586 [2024-12-05 14:02:01.709373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.586 [2024-12-05 14:02:01.709399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.586 qpair failed and we were unable to recover it. 
00:30:30.586 [2024-12-05 14:02:01.709506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.586 [2024-12-05 14:02:01.709532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.586 qpair failed and we were unable to recover it.
00:30:30.586 [... the same three-message pattern (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 14:02:01.709620 through 14:02:01.726159 for tqpair values 0x7f9d30000b90, 0x7f9d28000b90, 0x7f9d24000b90 and 0x1315fa0, all against addr=10.0.0.2, port=4420 ...]
00:30:30.589 [2024-12-05 14:02:01.726251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.726290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.726425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.726453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.726537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.726563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.726650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.726677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.726814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.726841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 
00:30:30.589 [2024-12-05 14:02:01.726924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.726951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.727038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.727067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.727223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.727263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.727364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.727392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.727512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.727538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 
00:30:30.589 [2024-12-05 14:02:01.727652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.727684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.727826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.727852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.727994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.728027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.728142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.728171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.728286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.728313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 
00:30:30.589 [2024-12-05 14:02:01.728409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.728456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.728544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.728570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.728683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.728731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.728869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.728914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.729050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.729094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 
00:30:30.589 [2024-12-05 14:02:01.729207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.729232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.729342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.589 [2024-12-05 14:02:01.729373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.589 qpair failed and we were unable to recover it. 00:30:30.589 [2024-12-05 14:02:01.729478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.729504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.729591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.729616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.729733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.729758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 
00:30:30.590 [2024-12-05 14:02:01.729843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.729870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.729987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.730103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.730219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.730361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 
00:30:30.590 [2024-12-05 14:02:01.730479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.730601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.730738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.730894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.730920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.731031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 
00:30:30.590 [2024-12-05 14:02:01.731173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.731295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.731439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.731546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.731662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 
00:30:30.590 [2024-12-05 14:02:01.731776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.731892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.731918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.732053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.732078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.732213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.732239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.732395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.732444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 
00:30:30.590 [2024-12-05 14:02:01.732566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.732602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.732723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.732760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.732871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.732908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.733057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.733093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.733238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.733272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 
00:30:30.590 [2024-12-05 14:02:01.733381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.733407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.733499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.733526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.733620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.733645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.733784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.733818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.734016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.734050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 
00:30:30.590 [2024-12-05 14:02:01.734174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.590 [2024-12-05 14:02:01.734208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.590 qpair failed and we were unable to recover it. 00:30:30.590 [2024-12-05 14:02:01.734345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.734371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.734486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.734514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.734633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.734659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.734778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.734804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 
00:30:30.591 [2024-12-05 14:02:01.734899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.734925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.735007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.735063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.735187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.735214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.735291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.735317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.735401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.735436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 
00:30:30.591 [2024-12-05 14:02:01.735524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.735550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.735660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.735704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.735845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.735880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.736012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.736037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.736257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.736291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 
00:30:30.591 [2024-12-05 14:02:01.736412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.736476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.736586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.736614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.736725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.736751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.736903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.736937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.737097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.737131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 
00:30:30.591 [2024-12-05 14:02:01.737269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.737296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.737433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.737460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.737551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.737578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.737691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.737717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 00:30:30.591 [2024-12-05 14:02:01.737802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.591 [2024-12-05 14:02:01.737846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.591 qpair failed and we were unable to recover it. 
00:30:30.594 [2024-12-05 14:02:01.754293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.754319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 00:30:30.594 [2024-12-05 14:02:01.754433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.754460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 00:30:30.594 [2024-12-05 14:02:01.754551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.754576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 00:30:30.594 [2024-12-05 14:02:01.754690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.754716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 00:30:30.594 [2024-12-05 14:02:01.754797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.754822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 
00:30:30.594 [2024-12-05 14:02:01.754910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.754936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 00:30:30.594 [2024-12-05 14:02:01.755053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.755080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 00:30:30.594 [2024-12-05 14:02:01.755176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.594 [2024-12-05 14:02:01.755204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.594 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.755291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.755317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.755406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.755440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.755561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.755588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.755670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.755697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.755798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.755825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.755913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.755941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.756029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.756139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.756253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.756387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.756536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.756673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.756788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.756905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.756932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.757027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.757053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.757134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.757160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.757270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.757297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.757399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.757446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.757535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.757562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.757642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.757668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.757774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.757808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.757977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.758012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.758164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.758191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.758301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.758327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.758468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.758507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.758628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.758663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.758779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.758815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.758952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.758988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.759088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.759123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.759268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.759302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.759426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.759454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.759575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.759604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.759705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.759739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.759869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.759917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.759993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.760020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.760135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.760161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 00:30:30.595 [2024-12-05 14:02:01.760241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.595 [2024-12-05 14:02:01.760268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.595 qpair failed and we were unable to recover it. 
00:30:30.595 [2024-12-05 14:02:01.760368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.760395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.760506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.760535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.760625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.760651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.760734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.760778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.760894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.760943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 
00:30:30.596 [2024-12-05 14:02:01.761092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.761125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.761247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.761288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.761372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.761398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.761491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.761522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.761648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.761677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 
00:30:30.596 [2024-12-05 14:02:01.761894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.761929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.762052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.762087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.762259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.762293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.762438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.762470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.762573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.762602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 
00:30:30.596 [2024-12-05 14:02:01.762717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.762744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.762835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.762861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.762953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.762979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.763069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.763180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 
00:30:30.596 [2024-12-05 14:02:01.763294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.763440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.763563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.763716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.763848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 
00:30:30.596 [2024-12-05 14:02:01.763969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.763995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.764121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.764155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.764308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.764351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.596 [2024-12-05 14:02:01.764475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.596 [2024-12-05 14:02:01.764501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.596 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.764588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.764614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.597 [2024-12-05 14:02:01.764695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.764721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.764825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.764859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.765028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.765062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.765205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.765238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.765356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.765383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.597 [2024-12-05 14:02:01.765506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.765533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.765618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.765645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.765784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.765818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.765979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.766013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.766141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.766167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.597 [2024-12-05 14:02:01.766337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.766376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.766494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.766522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.766644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.766670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.766772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.766807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.766966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.597 [2024-12-05 14:02:01.767092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.767204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.767317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.767440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.767582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.597 [2024-12-05 14:02:01.767722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.767828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.767853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.767978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.768006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.768121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.768166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.768264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.768292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.597 [2024-12-05 14:02:01.768388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.768414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.768518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.768544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.768643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.768677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.768840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.768892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.768978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.769003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.597 [2024-12-05 14:02:01.769093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.769123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.769208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.769235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.769329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.769356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.769453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.769481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 00:30:30.597 [2024-12-05 14:02:01.769577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.597 [2024-12-05 14:02:01.769604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.597 qpair failed and we were unable to recover it. 
00:30:30.598 [2024-12-05 14:02:01.769697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.769725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.769815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.769841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.769953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.769979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.770092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.770118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.770206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.770232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 
00:30:30.598 [2024-12-05 14:02:01.770328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.770357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.770475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.770502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.770620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.770646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.770763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.770796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.770963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.770989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 
00:30:30.598 [2024-12-05 14:02:01.771148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.771181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.771319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.771352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.771486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.771512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.771599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.771644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.771780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.771812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 
00:30:30.598 [2024-12-05 14:02:01.771950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.771993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.772127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.772165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.772285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.772313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.772401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.772436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.772532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.772558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 
00:30:30.598 [2024-12-05 14:02:01.772689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.772737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.772828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.772853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.772939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.772964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.773059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.773085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.773207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.773236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 
00:30:30.598 [2024-12-05 14:02:01.773367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.773395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.773508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.773536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.773623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.773669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.773823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.773857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.773986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.774017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 
00:30:30.598 [2024-12-05 14:02:01.774159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.774202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.774348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.774374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.774513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.598 [2024-12-05 14:02:01.774540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.598 qpair failed and we were unable to recover it. 00:30:30.598 [2024-12-05 14:02:01.774626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.774652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.774735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.774782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 
00:30:30.599 [2024-12-05 14:02:01.774914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.774947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.775108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.775140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.775324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.775351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.775446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.775475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.775590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.775619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 
00:30:30.599 [2024-12-05 14:02:01.775747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.775780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.775913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.775960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.776098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.776139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.776255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.776289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.776388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.776437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 
00:30:30.599 [2024-12-05 14:02:01.776524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.776549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.776639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.776666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.776802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.776827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.776939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.776965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.777083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.777109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 
00:30:30.599 [2024-12-05 14:02:01.777222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.777250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.777387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.777413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.777529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.777554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.777654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.777684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.777809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.777839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 
00:30:30.599 [2024-12-05 14:02:01.777914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.777944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.778030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.778057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.778173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.778199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.778338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.778364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 00:30:30.599 [2024-12-05 14:02:01.778459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.599 [2024-12-05 14:02:01.778486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.599 qpair failed and we were unable to recover it. 
00:30:30.599 [2024-12-05 14:02:01.778580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.599 [2024-12-05 14:02:01.778627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.599 qpair failed and we were unable to recover it.
00:30:30.599 [2024-12-05 14:02:01.778753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.599 [2024-12-05 14:02:01.778785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.599 qpair failed and we were unable to recover it.
00:30:30.599 [2024-12-05 14:02:01.778927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.599 [2024-12-05 14:02:01.778959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.599 qpair failed and we were unable to recover it.
00:30:30.599 [2024-12-05 14:02:01.779063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.779095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.779194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.779239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.779326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.779351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.779489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.779516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.779621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.779647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.779764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.779795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.779955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.779987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.780127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.780159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.780301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.780329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.780431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.780456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.780549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.780574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.780689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.780715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.780825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.780852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.780993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.781135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.781251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.781371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.781506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.781622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.781773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.781962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.781989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.782067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.782092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.782209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.782235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.782359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.782387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.782485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.782510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.782631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.782658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.782803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.782835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.782999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.783030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.783130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.783177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.783321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.783347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.783482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.783509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.783634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.783660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.783763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.783799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.783997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.784029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.784187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.784235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.600 qpair failed and we were unable to recover it.
00:30:30.600 [2024-12-05 14:02:01.784375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.600 [2024-12-05 14:02:01.784403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.784558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.784591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.784680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.784704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.784863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.784906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.785105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.785136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.785241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.785270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.785373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.785397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.785509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.785548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.785665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.785692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.785773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.785800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.785933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.785976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.786098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.786131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.786267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.786298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.786441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.786468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.786589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.786615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.786699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.786725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.786820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.786865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.786996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.787028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.787157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.787189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.787323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.787352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.787486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.787525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.787627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.787665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.787819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.787846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.787974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.788100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.788247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.788366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.788501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.788661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.788781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.788900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.788928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.789044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.789090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.789203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.789228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.789311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.789339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.789431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.789460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.789550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.789577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.789719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.601 [2024-12-05 14:02:01.789744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.601 qpair failed and we were unable to recover it.
00:30:30.601 [2024-12-05 14:02:01.789833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.789857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.790011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.790037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.790112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.790136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.790257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.790284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.790385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.790435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.790544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.790573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.790681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.790710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.790904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.790947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.791033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.791058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.791174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.791199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.791341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.791367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.791457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.791482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.791568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.791595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.791711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.791736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.791869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.791909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.792941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.792966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.793075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.793101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.793216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.793246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.793358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.793384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.793521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.793549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.793641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.793674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.793822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.793847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.793964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.794008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.794144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.794175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.794303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.794346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.794442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.794469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.794550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.794576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.794655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.602 [2024-12-05 14:02:01.794681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.602 qpair failed and we were unable to recover it.
00:30:30.602 [2024-12-05 14:02:01.794817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.603 [2024-12-05 14:02:01.794847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.603 qpair failed and we were unable to recover it.
00:30:30.603 [2024-12-05 14:02:01.794981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.603 [2024-12-05 14:02:01.795011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.603 qpair failed and we were unable to recover it.
00:30:30.603 [2024-12-05 14:02:01.795116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.603 [2024-12-05 14:02:01.795144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.603 qpair failed and we were unable to recover it.
00:30:30.603 [2024-12-05 14:02:01.795274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.603 [2024-12-05 14:02:01.795306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.603 qpair failed and we were unable to recover it.
00:30:30.603 [2024-12-05 14:02:01.795464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.603 [2024-12-05 14:02:01.795491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.603 qpair failed and we were unable to recover it.
00:30:30.603 [2024-12-05 14:02:01.795574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.795599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.795710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.795748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.795837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.795863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.795995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.796156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 
00:30:30.603 [2024-12-05 14:02:01.796310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.796477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.796594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.796705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.796843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 
00:30:30.603 [2024-12-05 14:02:01.796953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.796978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.797075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.797101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.797189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.797214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.797346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.797374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.797489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.797528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 
00:30:30.603 [2024-12-05 14:02:01.797678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.797716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.797862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.797889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.797977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.798005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.798087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.798113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.798252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.798282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 
00:30:30.603 [2024-12-05 14:02:01.798381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.798413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.798539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.798568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.798660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.798690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.798811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.798854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.798993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.799037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 
00:30:30.603 [2024-12-05 14:02:01.799132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.799157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.799274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.799299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.799465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.799511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.799610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.799658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 00:30:30.603 [2024-12-05 14:02:01.799763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.603 [2024-12-05 14:02:01.799795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.603 qpair failed and we were unable to recover it. 
00:30:30.603 [2024-12-05 14:02:01.799942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.799968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.800083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.800109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.800231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.800260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.800377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.800403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.800540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.800569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 
00:30:30.604 [2024-12-05 14:02:01.800684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.800710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.800799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.800826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.800918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.800944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.801058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.801171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 
00:30:30.604 [2024-12-05 14:02:01.801290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.801443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.801572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.801682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.801796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 
00:30:30.604 [2024-12-05 14:02:01.801936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.801963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.802039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.802141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.802254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.802370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 
00:30:30.604 [2024-12-05 14:02:01.802535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.802671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.802811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.802939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.802967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.803083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.803115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 
00:30:30.604 [2024-12-05 14:02:01.803202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.803228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.803321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.803348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.803456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.803483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.803624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.803656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.803785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.803816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 
00:30:30.604 [2024-12-05 14:02:01.804004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.804035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.804153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.804182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.804306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.604 [2024-12-05 14:02:01.804336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.604 qpair failed and we were unable to recover it. 00:30:30.604 [2024-12-05 14:02:01.804429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.804456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.804546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.804572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 
00:30:30.605 [2024-12-05 14:02:01.804675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.804704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.804835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.804860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.804981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.805147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.805252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 
00:30:30.605 [2024-12-05 14:02:01.805387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.805550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.805664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.805787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.805898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.805923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 
00:30:30.605 [2024-12-05 14:02:01.806009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.806034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.806123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.806149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.806276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.806315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.806430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.806478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.806640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.806679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 
00:30:30.605 [2024-12-05 14:02:01.806803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.806832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.806930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.806957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.807152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.807178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.807266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.807294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 00:30:30.605 [2024-12-05 14:02:01.807380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.605 [2024-12-05 14:02:01.807407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.605 qpair failed and we were unable to recover it. 
00:30:30.605 [2024-12-05 14:02:01.807516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.807543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.807687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.807713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.807808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.807838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.807958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.807988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.808115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.808145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.808243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.808275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.808371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.808415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.808532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.808559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.808657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.808696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.808810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.808863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.808996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.809045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.809157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.809184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.809287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.809317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.809424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.809473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.809569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.809596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.809683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.605 [2024-12-05 14:02:01.809708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.605 qpair failed and we were unable to recover it.
00:30:30.605 [2024-12-05 14:02:01.809830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.809855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.809938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.809967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.810113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.810141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.810264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.810293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.810407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.810442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.810528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.810554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.810669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.810720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.810862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.810894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.810993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.811024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.811123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.811154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.811291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.811316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.811438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.811466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.811582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.811609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.811697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.811724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.811867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.811899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.812014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.812057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.812195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.812239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.812353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.812397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.812547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.812587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.812710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.812757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.812926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.812960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.813078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.813104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.813233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.813266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.813371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.813397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.813506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.813533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.813623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.813652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.813732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.813757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.813883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.813914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.814915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.814954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.815053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.815080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.815195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.815221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.815316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.815343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.815470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.606 [2024-12-05 14:02:01.815499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.606 qpair failed and we were unable to recover it.
00:30:30.606 [2024-12-05 14:02:01.815585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.815611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.815722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.815754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.815907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.815952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.816037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.816064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.816181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.816206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.816297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.816323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.816467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.816494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.816584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.816610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.816720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.816745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.816862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.816888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.817844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.817876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.818016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.818042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.818173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.818200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.818316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.818351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.818444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.818471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.818612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.818637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.818770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.818799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.818959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.819124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.819246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.819359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.819505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.819626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.819804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.819971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.819996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.820078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.820104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.820210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.820249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.820347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.820376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.820505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.820533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.820676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.820723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.820842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.820888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.821031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.607 [2024-12-05 14:02:01.821059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.607 qpair failed and we were unable to recover it.
00:30:30.607 [2024-12-05 14:02:01.821172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.607 [2024-12-05 14:02:01.821199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.607 qpair failed and we were unable to recover it. 00:30:30.607 [2024-12-05 14:02:01.821297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.607 [2024-12-05 14:02:01.821322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.607 qpair failed and we were unable to recover it. 00:30:30.607 [2024-12-05 14:02:01.821402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.821434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.821548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.821575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.821687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.821713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.821832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.821858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.821964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.821989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.822076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.822103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.822214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.822253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.822350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.822378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.822507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.822545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.822650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.822678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.822798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.822825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.822957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.822988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.823096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.823143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.823254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.823287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.823400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.823435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.823557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.823586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.823667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.823692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.823772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.823798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.823884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.823911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.824029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.824060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.824167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.824194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.824312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.824340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.824462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.824494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.824590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.824617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.824698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.824724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.824844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.824897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.825030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.825061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.825222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.825252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.825389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.825414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.825543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.825568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.825642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.825667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.825782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.825828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.825973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.826005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.826115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.826149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.826296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.826323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.826454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.826493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.826619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.826648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.826762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.826808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 
00:30:30.608 [2024-12-05 14:02:01.826933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.826960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.827098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.827129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.827238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.608 [2024-12-05 14:02:01.827264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.608 qpair failed and we were unable to recover it. 00:30:30.608 [2024-12-05 14:02:01.827383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.827410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.827508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.827533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.827622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.827646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.827862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.827895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.828001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.828032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.828158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.828196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.828423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.828461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.828580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.828605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.828700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.828743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.828881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.828928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.829071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.829102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.829205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.829236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.829353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.829379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.829481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.829510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.829607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.829633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.829740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.829771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.829906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.829942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.830088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.830150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.830289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.830328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.830478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.830507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.830606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.830633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.830754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.830786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.830921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.830954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.831083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.831116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.831250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.831284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.831373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.831401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.831531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.831559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.831682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.831728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.831836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.831877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.831992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.832019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.832193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.832223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.832353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.832391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.832512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.832552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.832685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.832714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.832806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.832833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.832917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.832948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 00:30:30.609 [2024-12-05 14:02:01.833052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.833082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.609 qpair failed and we were unable to recover it. 
00:30:30.609 [2024-12-05 14:02:01.833207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.609 [2024-12-05 14:02:01.833236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.833357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.833396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.833499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.833525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.833650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.833702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.833808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.833837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.833938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.833964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.834059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.834087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.834202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.834229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.834328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.834364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.834496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.834535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.834661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.834689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.834804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.834829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.834908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.834952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.835045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.835075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.835173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.835204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.835302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.835329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.835471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.835498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.835611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.835638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.835779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.835809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.835907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.835938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.836109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.836155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.836264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.836303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.836402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.836444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.836568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.836594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.836722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.836752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.836874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.836923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.837044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.837075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.837239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.837271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.837383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.837431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.837537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.837571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.837658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.837684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.837799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.837826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.837916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.837944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.838025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.838149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.838285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.838424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.838543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.838665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.838800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 
00:30:30.610 [2024-12-05 14:02:01.838933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.838959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.610 [2024-12-05 14:02:01.839076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.610 [2024-12-05 14:02:01.839102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.610 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.839186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.839211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.839428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.839454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.839534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.839559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.839722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.839752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.839906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.839935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.840052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.840080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.840229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.840257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.840435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.840461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.840576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.840602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.840690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.840716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.840833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.840859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.840954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.840983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.841137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.841165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.841276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.841303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.841410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.841459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.841581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.841609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.841784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.841846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.841953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.841983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.842087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.842113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.842227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.842255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.842378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.842404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.842518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.842556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.842691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.842718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.842860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.842886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.842997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.843110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.843223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.843330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.843449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.843571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.843766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.843914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.843940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.844056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.844084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.844187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.844234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.844374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.844400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.844552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.844578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.844676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.844703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.844823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.844852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 
00:30:30.611 [2024-12-05 14:02:01.844979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.845007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.845169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.845215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.611 [2024-12-05 14:02:01.845305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.611 [2024-12-05 14:02:01.845331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.611 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.845440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.845467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.845552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.845578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 
00:30:30.612 [2024-12-05 14:02:01.845668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.845696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.845828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.845861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.845984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.846011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.846142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.846168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.846279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.846305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 
00:30:30.612 [2024-12-05 14:02:01.846404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.846443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.846532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.846562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.846676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.846702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.846843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.846871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.847015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 
00:30:30.612 [2024-12-05 14:02:01.847153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.847271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.847411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.847561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.847670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 
00:30:30.612 [2024-12-05 14:02:01.847798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.847931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.847960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.848095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.848123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.848217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.848243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.848322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.848348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 
00:30:30.612 [2024-12-05 14:02:01.848436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.848462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.848577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.848603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.848692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.848718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.848844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.848872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 00:30:30.612 [2024-12-05 14:02:01.848986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.612 [2024-12-05 14:02:01.849012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.612 qpair failed and we were unable to recover it. 
00:30:30.612 [2024-12-05 14:02:01.849151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.849191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.849288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.849317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.849441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.849469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.849553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.849597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.849696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.849726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.849815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.849843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.849963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.849992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.850122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.850153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.850288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.850315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.850430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.850457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.850544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.850570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.850654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.850679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.850788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.850814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.850892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.850920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.851039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.851065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.851181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 14:02:01.851210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.612 qpair failed and we were unable to recover it.
00:30:30.612 [2024-12-05 14:02:01.851313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.851339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.851448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.851477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.851594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.851621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.851735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.851765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.851880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.851909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.852086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.852130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.852221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.852249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.852367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.852398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.852529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.852555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.852676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.852718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.852826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.852852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.853001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.853032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.853191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.853217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.853332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.853358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.853448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.853481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.853580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.853606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.853686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.853716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.853833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.853859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.854963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.854989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.855071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.855097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.613 [2024-12-05 14:02:01.855208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 14:02:01.855234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.613 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.855331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.855369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.855471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.855510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.855657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.855685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.855775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.855800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.855902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.855928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.856013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.856039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.856175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.856205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.856340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.856365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.856493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.856520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.856630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.856660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.856765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.856794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.856919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.856960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.857047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.857076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.857162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.857192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.857342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.857386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.857511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.857544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.857673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.857711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.857826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.857876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.858034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.858066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.858193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.858220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.858369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.858396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.858534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.858561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.858653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.858679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.858801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.858827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.858922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.858969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.859118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.859150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.859292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.859319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.859410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.859443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.859524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.859552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.859657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.859685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.859797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.859823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.859942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.859967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.860058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.860083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.860201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.860226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.860311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 14:02:01.860339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.614 qpair failed and we were unable to recover it.
00:30:30.614 [2024-12-05 14:02:01.860440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.860468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.860580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.860611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.860748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.860774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.860873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.860899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.860985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.861136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.861281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.861413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.861592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.861726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.861849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.861961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.861990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.862125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.862158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.862268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.862294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.862426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.862454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.862541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.615 [2024-12-05 14:02:01.862568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.615 qpair failed and we were unable to recover it.
00:30:30.615 [2024-12-05 14:02:01.862663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.862693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.862820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.862850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.862962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.863006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.863112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.863141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.863262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.863298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 
00:30:30.615 [2024-12-05 14:02:01.863453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.863480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.863567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.863592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.863719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.863745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.863887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.863918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.864028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.864053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 
00:30:30.615 [2024-12-05 14:02:01.864167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.864197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.864300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.864325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.864438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.864465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.864560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.864590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.864742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.864771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 
00:30:30.615 [2024-12-05 14:02:01.864901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.864930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.865051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.865081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.865201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.865260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.865427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.865473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.865599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.865627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 
00:30:30.615 [2024-12-05 14:02:01.865713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.865738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.865896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.615 [2024-12-05 14:02:01.865939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.615 qpair failed and we were unable to recover it. 00:30:30.615 [2024-12-05 14:02:01.866049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.866181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.866317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.866435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.866555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.866681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.866811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.866928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.866955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.867036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.867062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.867207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.867233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.867357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.867384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.867505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.867544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.867688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.867717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.867810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.867837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.867949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.867978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.868132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.868163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.868305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.868333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.868414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.868449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.868543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.868569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.868681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.868706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.868787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.868814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.868905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.868951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.869068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.869112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.869265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.869310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.869430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.869460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.869567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.869593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.869703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.869750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.869859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.869899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.870006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.870138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.870288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.870458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.870585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.870699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.870810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.870956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.870985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.871140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.871175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 00:30:30.616 [2024-12-05 14:02:01.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.616 [2024-12-05 14:02:01.871315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.616 qpair failed and we were unable to recover it. 
00:30:30.616 [2024-12-05 14:02:01.871400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.871433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.871518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.871543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.871672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.871710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.871855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.871902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.872009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 
00:30:30.617 [2024-12-05 14:02:01.872145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.872287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.872436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.872548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.872661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 
00:30:30.617 [2024-12-05 14:02:01.872766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.872882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.872912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.873013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.873127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.873237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 
00:30:30.617 [2024-12-05 14:02:01.873345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.873506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.873666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.873812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.873931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.873958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 
00:30:30.617 [2024-12-05 14:02:01.874047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.874076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.874173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.874202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.874335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.874362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.874461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.874487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.874575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.874620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 
00:30:30.617 [2024-12-05 14:02:01.874751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.874783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.874874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.874908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.875054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.875083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.875180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.875222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 00:30:30.617 [2024-12-05 14:02:01.875304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.617 [2024-12-05 14:02:01.875330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.617 qpair failed and we were unable to recover it. 
00:30:30.620 [2024-12-05 14:02:01.890296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.620 [2024-12-05 14:02:01.890323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.620 qpair failed and we were unable to recover it. 00:30:30.620 [2024-12-05 14:02:01.890455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.620 [2024-12-05 14:02:01.890495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.620 qpair failed and we were unable to recover it. 00:30:30.620 [2024-12-05 14:02:01.890590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.620 [2024-12-05 14:02:01.890617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.620 qpair failed and we were unable to recover it. 00:30:30.620 [2024-12-05 14:02:01.890765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.620 [2024-12-05 14:02:01.890800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.620 qpair failed and we were unable to recover it. 00:30:30.620 [2024-12-05 14:02:01.890931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.620 [2024-12-05 14:02:01.890968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.620 qpair failed and we were unable to recover it. 
00:30:30.620 [2024-12-05 14:02:01.891075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.891106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.891233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.891258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.891336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.891362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.891463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.891489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.891583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.891609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 
00:30:30.621 [2024-12-05 14:02:01.891692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.891717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.891849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.891894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.892004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.892115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.892225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 
00:30:30.621 [2024-12-05 14:02:01.892349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.892475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.892599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.892725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.892854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.892881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 
00:30:30.621 [2024-12-05 14:02:01.892996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.893110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.893221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.893343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.893496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 
00:30:30.621 [2024-12-05 14:02:01.893611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.893734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.893853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.893880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.893991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.894107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 
00:30:30.621 [2024-12-05 14:02:01.894214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.894352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.894501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.894623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.894749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 
00:30:30.621 [2024-12-05 14:02:01.894901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.894926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.895079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.895105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.895304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.895338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.895500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.895527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.895619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.895645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 
00:30:30.621 [2024-12-05 14:02:01.895759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.895784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.895873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.895900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.896009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.621 [2024-12-05 14:02:01.896036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.621 qpair failed and we were unable to recover it. 00:30:30.621 [2024-12-05 14:02:01.896176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.896221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.896326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.896352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 
00:30:30.622 [2024-12-05 14:02:01.896445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.896472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.896554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.896580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.896684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.896745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.896893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.896926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.897050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.897090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 
00:30:30.622 [2024-12-05 14:02:01.897189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.897217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.897334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.897361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.897455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.897485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.897607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.897634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.897757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.897804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 
00:30:30.622 [2024-12-05 14:02:01.897887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.897913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.898008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.898169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.898297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.898470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 
00:30:30.622 [2024-12-05 14:02:01.898587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.898718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.898827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.898929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.898954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.899065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 
00:30:30.622 [2024-12-05 14:02:01.899174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.899299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.899423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.899541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.899687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 
00:30:30.622 [2024-12-05 14:02:01.899807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.899956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.899981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.900068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.900095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.900219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.900245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 00:30:30.622 [2024-12-05 14:02:01.900391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.622 [2024-12-05 14:02:01.900422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.622 qpair failed and we were unable to recover it. 
00:30:30.622 [2024-12-05 14:02:01.900504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.622 [2024-12-05 14:02:01.900531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.622 qpair failed and we were unable to recover it.
00:30:30.625 [last two messages repeated through 14:02:01.915798 for tqpairs 0x7f9d30000b90, 0x7f9d28000b90, 0x7f9d24000b90, and 0x1315fa0 (connect() failed, errno = 111, addr=10.0.0.2, port=4420); each qpair failed and we were unable to recover it]
00:30:30.625 [2024-12-05 14:02:01.915907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.915933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.916054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.916080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.916166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.916191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.916283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.916315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.916395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.916428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 
00:30:30.626 [2024-12-05 14:02:01.916527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.916553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.916654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.916680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.916779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.916810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.916962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.917147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 
00:30:30.626 [2024-12-05 14:02:01.917308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.917426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.917543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.917650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.917757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 
00:30:30.626 [2024-12-05 14:02:01.917877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.917901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.917986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.918133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.918264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.918380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 
00:30:30.626 [2024-12-05 14:02:01.918536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.918657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.918800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.918935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.918961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.919077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.919104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 
00:30:30.626 [2024-12-05 14:02:01.919219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.919245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.919332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.919360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.919492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.919523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.919624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.919650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.919756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.919785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 
00:30:30.626 [2024-12-05 14:02:01.919941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.919971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.920088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.920116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.920203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.920231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.920368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.920395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.920493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.920520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 
00:30:30.626 [2024-12-05 14:02:01.920613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.920640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.920729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.920754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.920872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.920916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.626 qpair failed and we were unable to recover it. 00:30:30.626 [2024-12-05 14:02:01.921050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.626 [2024-12-05 14:02:01.921094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.921180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.921206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 
00:30:30.627 [2024-12-05 14:02:01.921284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.921311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.921428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.921454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.921530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.921554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.921635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.921661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.921746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.921771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 
00:30:30.627 [2024-12-05 14:02:01.921895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.921925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.922029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.922163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.922275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.922381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 
00:30:30.627 [2024-12-05 14:02:01.922528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.922657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.922783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.922920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.922946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.923036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 
00:30:30.627 [2024-12-05 14:02:01.923164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.923285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.923406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.923572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.923716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 
00:30:30.627 [2024-12-05 14:02:01.923834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.923957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.923982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.924057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.924203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.924318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 
00:30:30.627 [2024-12-05 14:02:01.924436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.924545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.924712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.924846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.924963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.924988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 
00:30:30.627 [2024-12-05 14:02:01.925112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.925137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.925245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.627 [2024-12-05 14:02:01.925271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.627 qpair failed and we were unable to recover it. 00:30:30.627 [2024-12-05 14:02:01.925379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.628 [2024-12-05 14:02:01.925404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.628 qpair failed and we were unable to recover it. 00:30:30.628 [2024-12-05 14:02:01.925507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.628 [2024-12-05 14:02:01.925532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.628 qpair failed and we were unable to recover it. 00:30:30.628 [2024-12-05 14:02:01.925617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.628 [2024-12-05 14:02:01.925644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.628 qpair failed and we were unable to recover it. 
00:30:30.628 [2024-12-05 14:02:01.925732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.628 [2024-12-05 14:02:01.925757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.628 qpair failed and we were unable to recover it.
[... same posix.c:1054 / nvme_tcp.c:2288 error triplet repeated from 14:02:01.925837 through 14:02:01.941147: connect() failed with errno = 111 to addr=10.0.0.2, port=4420, cycling across tqpair handles 0x7f9d30000b90, 0x7f9d28000b90, 0x7f9d24000b90, and 0x1315fa0, each followed by "qpair failed and we were unable to recover it." ...]
00:30:30.631 [2024-12-05 14:02:01.941225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.941257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.941380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.941408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.941500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.941526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.941617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.941641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.941781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.941806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 
00:30:30.631 [2024-12-05 14:02:01.941954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.941979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.942095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.942122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.942212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.942238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.942333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.942360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.942458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.942484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 
00:30:30.631 [2024-12-05 14:02:01.942576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.942601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.942738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.942764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.942869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.942911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.943023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.943133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 
00:30:30.631 [2024-12-05 14:02:01.943256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.943366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.943492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.943604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.943728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 
00:30:30.631 [2024-12-05 14:02:01.943870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.943897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.944010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.944037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.944157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.944184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.944266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.944291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.944411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.944443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 
00:30:30.631 [2024-12-05 14:02:01.944534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.944559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.631 [2024-12-05 14:02:01.944674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.631 [2024-12-05 14:02:01.944700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.631 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.944784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.944813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.944898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.944925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.945005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 
00:30:30.632 [2024-12-05 14:02:01.945110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.945260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.945384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.945503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.945614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 
00:30:30.632 [2024-12-05 14:02:01.945721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.945863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.945888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.946006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.946034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.946121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.946148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.946231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.946257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 
00:30:30.632 [2024-12-05 14:02:01.946338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.946365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.946493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.946533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.946653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.946691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.946846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.946873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.946988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 
00:30:30.632 [2024-12-05 14:02:01.947100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.947212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.947319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.947452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.947562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 
00:30:30.632 [2024-12-05 14:02:01.947723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.947836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.947940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.947964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.948080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.948106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.948201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.948225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 
00:30:30.632 [2024-12-05 14:02:01.948389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.948438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.948556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.948583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.948696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.948723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.948813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.948837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.948953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.948979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 
00:30:30.632 [2024-12-05 14:02:01.949063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.949088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.949202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.632 [2024-12-05 14:02:01.949241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.632 qpair failed and we were unable to recover it. 00:30:30.632 [2024-12-05 14:02:01.949335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.949363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.949449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.949474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.949585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.949612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 
00:30:30.633 [2024-12-05 14:02:01.949705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.949732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.949822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.949849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.949940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.949971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.950113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.950139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.950243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.950281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 
00:30:30.633 [2024-12-05 14:02:01.950380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.950408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.950553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.950579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.950686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.950712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.950859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.950885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 00:30:30.633 [2024-12-05 14:02:01.950975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.633 [2024-12-05 14:02:01.950999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.633 qpair failed and we were unable to recover it. 
00:30:30.633 [2024-12-05 14:02:01.951086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.633 [2024-12-05 14:02:01.951112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.633 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats verbatim through 14:02:01.965614, cycling over tqpair handles 0x7f9d30000b90, 0x7f9d28000b90, 0x7f9d24000b90, and 0x1315fa0, all against addr=10.0.0.2, port=4420; repeated entries omitted ...]
00:30:30.636 [2024-12-05 14:02:01.965747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.965773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.965853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.965880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.965991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.966186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.966312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 
00:30:30.636 [2024-12-05 14:02:01.966429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.966541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.966662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.966776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.966891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.966920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 
00:30:30.636 [2024-12-05 14:02:01.967001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.967126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.967243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.967409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.967526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 
00:30:30.636 [2024-12-05 14:02:01.967641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.967747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.967863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.967888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.967974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.968000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 00:30:30.636 [2024-12-05 14:02:01.968124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.636 [2024-12-05 14:02:01.968158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.636 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.968248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.968275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.968370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.968396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.968525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.968551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.968663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.968689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.968776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.968807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.968915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.968941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.969060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.969204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.969326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.969484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.969603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.969719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.969836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.969947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.969975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.970089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.970116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.970199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.970226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.970345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.970374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.970498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.970528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.970684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.970711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.970796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.970822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.970913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.970939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.971029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.971062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.971181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.971206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.971291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.971316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.971435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.971464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.971554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.971581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.971667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.971694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.971835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.971861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.971962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.972003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.972137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.972167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.972317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.972345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.972482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.972522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.972672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.972700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.972807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.972835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.972946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.972973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 
00:30:30.637 [2024-12-05 14:02:01.973080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.973107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.973212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.637 [2024-12-05 14:02:01.973329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.637 [2024-12-05 14:02:01.973357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.637 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.973444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.973471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.973549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.973576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 
00:30:30.638 [2024-12-05 14:02:01.973669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.973700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.973841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.973868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.973990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.974122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.974268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 
00:30:30.638 [2024-12-05 14:02:01.974413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.974539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.974655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.974771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.974890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.974916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 
00:30:30.638 [2024-12-05 14:02:01.974999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.975026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.975159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.975199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.975314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.975342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.975481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.975520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 00:30:30.638 [2024-12-05 14:02:01.975620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.638 [2024-12-05 14:02:01.975649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.638 qpair failed and we were unable to recover it. 
00:30:30.638 [2024-12-05 14:02:01.975787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.975813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.975901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.975928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.976922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.976947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.977894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.977920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.978013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.978039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.978170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.978209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.978331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.638 [2024-12-05 14:02:01.978360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.638 qpair failed and we were unable to recover it.
00:30:30.638 [2024-12-05 14:02:01.978441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.978468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.978562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.978589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.978698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.978725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.978811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.978837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.978939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.978966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.979967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.979994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.980117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.980238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.980345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.980463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.980630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.980791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.980905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.980994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.981022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.981118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.981147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.981300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.981339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.981473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.981503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.981587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.981615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.981711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.981737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.981838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.981881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.981987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.982014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.982110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.982136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.982228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.982255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.982383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.982428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.982549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.982578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.982687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.982713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.982826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.982852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.982993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.983019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.983155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.983186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.983278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.983304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.983432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.639 [2024-12-05 14:02:01.983471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.639 qpair failed and we were unable to recover it.
00:30:30.639 [2024-12-05 14:02:01.983560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.983588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.983729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.983773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.983861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.983888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.984939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.984966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.985965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.985993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.986089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.986127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.986279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.986307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.986426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.986453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.986550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.986577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.986662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.986689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.986790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.986821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.986948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.986978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.987124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.987249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.987354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.987507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.987649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.987774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.987891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.987985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.988014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.988140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.988168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.988278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.988304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.988413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.988445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.988549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.640 [2024-12-05 14:02:01.988582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.640 qpair failed and we were unable to recover it.
00:30:30.640 [2024-12-05 14:02:01.988712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.988738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.988829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.988857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.988944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.988972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.989068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.989097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.989182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.989208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.989324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.989354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.989443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.989470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.989566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.989592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.989679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.989723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.989857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.989891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.990041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.990070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.990176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.990205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.990335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.990374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.990488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.990516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.990609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.990636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.990765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.990793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.990912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.990940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.991055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.641 [2024-12-05 14:02:01.991082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.641 qpair failed and we were unable to recover it.
00:30:30.641 [2024-12-05 14:02:01.991209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.991235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.991333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.991362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.991475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.991504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.991587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.991614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.991709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.991737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 
00:30:30.641 [2024-12-05 14:02:01.991888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.991931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.992030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.992059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.992211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.992238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.992327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.992358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.992479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.992524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 
00:30:30.641 [2024-12-05 14:02:01.992621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.992647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.992757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.992782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.992874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.992900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.992980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.993008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.993097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.993123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 
00:30:30.641 [2024-12-05 14:02:01.993212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.641 [2024-12-05 14:02:01.993238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.641 qpair failed and we were unable to recover it. 00:30:30.641 [2024-12-05 14:02:01.993352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.993378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.993532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.993561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.993678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.993710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.993829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.993856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 
00:30:30.642 [2024-12-05 14:02:01.993947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.993972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.994066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.994104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.994210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.994237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.994349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.994375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.994508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.994535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 
00:30:30.642 [2024-12-05 14:02:01.994632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.994658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.994789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.994816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.994909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.994935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.995090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.995141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.995240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.995268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 
00:30:30.642 [2024-12-05 14:02:01.995354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.995380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.995485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.995512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.995599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.995626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.995769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.995812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.995908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.995935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 
00:30:30.642 [2024-12-05 14:02:01.996027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.996054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.996132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.996158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.996271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.996297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.996380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.996405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.996535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.996564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 
00:30:30.642 [2024-12-05 14:02:01.996696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.996726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.996875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.996920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.997036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.997157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.997271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 
00:30:30.642 [2024-12-05 14:02:01.997414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.997573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.997676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.997789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.997899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.997925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 
00:30:30.642 [2024-12-05 14:02:01.998041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.998066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.998178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.998203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.998283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.998309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.998399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.642 [2024-12-05 14:02:01.998433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.642 qpair failed and we were unable to recover it. 00:30:30.642 [2024-12-05 14:02:01.998516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.998542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 
00:30:30.643 [2024-12-05 14:02:01.998653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.998680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.998802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.998827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.998911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.998938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.999043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.999174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 
00:30:30.643 [2024-12-05 14:02:01.999289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.999404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.999525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.999629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.999745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 
00:30:30.643 [2024-12-05 14:02:01.999891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:01.999916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:01.999997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:02.000125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:02.000256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:02.000374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 
00:30:30.643 [2024-12-05 14:02:02.000534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:02.000645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:02.000754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:02.000891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.000918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 00:30:30.643 [2024-12-05 14:02:02.001037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.001064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 
00:30:30.643 [2024-12-05 14:02:02.001161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.643 [2024-12-05 14:02:02.001190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.643 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence repeated continuously from 14:02:02.001281 through 14:02:02.016849 for tqpair handles 0x7f9d24000b90, 0x7f9d28000b90, 0x7f9d30000b90, and 0x1315fa0, all against addr=10.0.0.2, port=4420 ...]
00:30:30.646 [2024-12-05 14:02:02.016943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.016968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 
00:30:30.646 [2024-12-05 14:02:02.017057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.017101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.017236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.017263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.017374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.017399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.017519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.017545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.017631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.017673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 
00:30:30.646 [2024-12-05 14:02:02.017802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.017838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.017940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.017970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.018076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.018107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.018224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.018253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.018373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.018400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 
00:30:30.646 [2024-12-05 14:02:02.018502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.018531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.018616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.646 [2024-12-05 14:02:02.018642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.646 qpair failed and we were unable to recover it. 00:30:30.646 [2024-12-05 14:02:02.018768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.018813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.018972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.019018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.019107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.019133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.019242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.019281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.019373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.019400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.019529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.019557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.019642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.019668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.019801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.019828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.019969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.020111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.020308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.020442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.020557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.020673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.020810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.020930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.020957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.021052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.021193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.021308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.021428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.021569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.021684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.021831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.021967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.021993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.022112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.022141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.022234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.022260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.022346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.022374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.022472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.022500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.022585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.022612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.022695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.022722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.022837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.022864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.022964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.023003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.023123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.023150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.023232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.023257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.023347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.023373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.023515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.023553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.023693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.023725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 00:30:30.647 [2024-12-05 14:02:02.023948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.647 [2024-12-05 14:02:02.023989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.647 qpair failed and we were unable to recover it. 
00:30:30.647 [2024-12-05 14:02:02.024119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.024150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.024260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.024286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.024372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.024399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.024485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.024512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.024631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.024664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 
00:30:30.648 [2024-12-05 14:02:02.024789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.024819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.024999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.025146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.025297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.025428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 
00:30:30.648 [2024-12-05 14:02:02.025547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.025663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.025776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.025891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.025918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.026028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.026055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 
00:30:30.648 [2024-12-05 14:02:02.026148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.026187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.026316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.026342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.026438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.026466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.026549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.026575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.026675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.026704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 
00:30:30.648 [2024-12-05 14:02:02.026844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.026874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.026977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.027081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.027229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.027351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 
00:30:30.648 [2024-12-05 14:02:02.027504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.027616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.027758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.027876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.027902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.027986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.028011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 
00:30:30.648 [2024-12-05 14:02:02.028127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.028156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.028239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.028264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.028373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.028412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.028545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.028573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.028658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.028702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 
00:30:30.648 [2024-12-05 14:02:02.028792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.028829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.029060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.029091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.029194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.029226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.648 qpair failed and we were unable to recover it. 00:30:30.648 [2024-12-05 14:02:02.029333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.648 [2024-12-05 14:02:02.029379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.029511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.029539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.029641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.029680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.029798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.029831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.029962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.029989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.030097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.030230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.030333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.030473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.030578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.030686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.030799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.030911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.030938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.031030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.031165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.031280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.031395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.031527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.031660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.031764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.031883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.031912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.032000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.032122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.032265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.032366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.032495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.032632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.032739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.032856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.032960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.032986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.033096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.033206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.033324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.033460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.033574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.033716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.033827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 
00:30:30.649 [2024-12-05 14:02:02.033969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.033997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.649 qpair failed and we were unable to recover it. 00:30:30.649 [2024-12-05 14:02:02.034093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.649 [2024-12-05 14:02:02.034119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.034228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.034254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.034339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.034365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.034492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.034519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 
00:30:30.650 [2024-12-05 14:02:02.034656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.034682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.034765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.034792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.034912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.034940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.035066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.035104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.035218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.035246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 
00:30:30.650 [2024-12-05 14:02:02.035360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.035386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.035526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.035565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.035696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.035725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.035835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.035862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.035949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.035984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 
00:30:30.650 [2024-12-05 14:02:02.036117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.036156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.036247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.036274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.036374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.036401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.036492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.036519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.036612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.036639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 
00:30:30.650 [2024-12-05 14:02:02.036783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.036813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.036972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.037001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.037118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.037148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.037259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.037286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.037380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.037406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 
00:30:30.650 [2024-12-05 14:02:02.037528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.037553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.037708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.037738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.037841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.037868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.038003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.038131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 
00:30:30.650 [2024-12-05 14:02:02.038245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.038382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.038536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.038649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 00:30:30.650 [2024-12-05 14:02:02.038791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.650 qpair failed and we were unable to recover it. 
00:30:30.650 [2024-12-05 14:02:02.038915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.650 [2024-12-05 14:02:02.038947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.039057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.039182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.039304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.039444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 
00:30:30.651 [2024-12-05 14:02:02.039557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.039683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.039793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.039927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.039954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.040048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.040073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 
00:30:30.651 [2024-12-05 14:02:02.040161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.040187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.040304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.040332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.040458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.040486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.040591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.040622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.040739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.040769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 
00:30:30.651 [2024-12-05 14:02:02.040867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.040893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.040996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.041026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.041134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.041161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.041288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.041317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.041441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.041478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 
00:30:30.651 [2024-12-05 14:02:02.041574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.041600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.041755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.041787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.041890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.041921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.042030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.042076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.042179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.042211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 
00:30:30.651 [2024-12-05 14:02:02.042324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.042351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.042442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.042469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.042560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.042585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.042748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.042793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.042906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.042933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 
00:30:30.651 [2024-12-05 14:02:02.043053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.043175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.043305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.043451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.043572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 
00:30:30.651 [2024-12-05 14:02:02.043688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.043801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.043911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.043936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.651 [2024-12-05 14:02:02.044021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.651 [2024-12-05 14:02:02.044049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.651 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.044133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.044160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 
00:30:30.652 [2024-12-05 14:02:02.044253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.044287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.044440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.044467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.044557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.044582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.044696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.044722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.044829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.044871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 
00:30:30.652 [2024-12-05 14:02:02.044967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.044995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.045092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.045129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.045220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.045262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.045350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.045375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.045481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.045508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 
00:30:30.652 [2024-12-05 14:02:02.045618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.045642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.045718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.045743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.045879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.045924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.046061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.046106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.046213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.046238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 
00:30:30.652 [2024-12-05 14:02:02.046351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.046377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.046479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.046519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.046640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.046667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.046777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.046803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.046952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.046984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 
00:30:30.652 [2024-12-05 14:02:02.047111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.047156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.047286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.047328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.047469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.047496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.047582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.047607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.047698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.047740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 
00:30:30.652 [2024-12-05 14:02:02.047833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.047861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.047954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.047982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.048079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.048108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.048266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.048299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.048438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.048465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 
00:30:30.652 [2024-12-05 14:02:02.048584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.048609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.048698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.048723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.048829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.048859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.048969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.652 [2024-12-05 14:02:02.048994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.652 qpair failed and we were unable to recover it. 00:30:30.652 [2024-12-05 14:02:02.049079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 
00:30:30.653 [2024-12-05 14:02:02.049186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.049320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.049477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.049592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.049708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 
00:30:30.653 [2024-12-05 14:02:02.049847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.049963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.049988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.050125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.050150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.050227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.050251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.050369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.050393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 
00:30:30.653 [2024-12-05 14:02:02.050515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.050540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.050623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.050652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.050769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.050800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.050905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.050930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.051017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 
00:30:30.653 [2024-12-05 14:02:02.051123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.051233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.051375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.051532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.051644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 
00:30:30.653 [2024-12-05 14:02:02.051806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.051937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.051962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.052042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.052163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.052294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 
00:30:30.653 [2024-12-05 14:02:02.052404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.052555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.052661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.052770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 00:30:30.653 [2024-12-05 14:02:02.052903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.653 [2024-12-05 14:02:02.052927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.653 qpair failed and we were unable to recover it. 
00:30:30.653 [2024-12-05 14:02:02.053009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-12-05 14:02:02.053034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.653 qpair failed and we were unable to recover it.
00:30:30.653 [2024-12-05 14:02:02.053144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-12-05 14:02:02.053169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.653 qpair failed and we were unable to recover it.
00:30:30.653 [2024-12-05 14:02:02.053276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-12-05 14:02:02.053301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.653 qpair failed and we were unable to recover it.
00:30:30.653 [2024-12-05 14:02:02.053406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-12-05 14:02:02.053455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.653 qpair failed and we were unable to recover it.
00:30:30.653 [2024-12-05 14:02:02.053576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-12-05 14:02:02.053603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.653 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats through 14:02:02.068 for tqpair addresses 0x7f9d30000b90, 0x7f9d28000b90, 0x7f9d24000b90 and 0x1315fa0 ...]
00:30:30.940 [2024-12-05 14:02:02.068329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.068355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.068442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.068467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.068575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.068602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.068697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.068723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.068808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.068834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 
00:30:30.940 [2024-12-05 14:02:02.068946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.068975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.069097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.069128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.069216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.069243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.069366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.069391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.069501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.069528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 
00:30:30.940 [2024-12-05 14:02:02.069632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.069659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.069761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.069788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.069930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.069972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.070113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.070148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.070243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.070269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 
00:30:30.940 [2024-12-05 14:02:02.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.070442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.070538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.070562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.070678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.070706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.070801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.070832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.070927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.070957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 
00:30:30.940 [2024-12-05 14:02:02.071057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.071082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.071167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.071195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.071314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.071341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.071449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.071476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.940 [2024-12-05 14:02:02.071572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.071598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 
00:30:30.940 [2024-12-05 14:02:02.071689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.940 [2024-12-05 14:02:02.071715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.940 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.071806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.071833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.071919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.071945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.072058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.072195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 
00:30:30.941 [2024-12-05 14:02:02.072307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.072422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.072533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.072641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.072791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 
00:30:30.941 [2024-12-05 14:02:02.072898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.072923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.073017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.073125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.073237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.073352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 
00:30:30.941 [2024-12-05 14:02:02.073483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.073596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.073744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.073883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.073910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.074003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 
00:30:30.941 [2024-12-05 14:02:02.074109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.074236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.074362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.074508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.074623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 
00:30:30.941 [2024-12-05 14:02:02.074776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.074894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.074920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.075015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.075050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.075176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.075212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.075333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.075360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 
00:30:30.941 [2024-12-05 14:02:02.075474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.075506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.075587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.075613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.075732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.075758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.075832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.075858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 00:30:30.941 [2024-12-05 14:02:02.075974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.941 [2024-12-05 14:02:02.076001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.941 qpair failed and we were unable to recover it. 
00:30:30.941 [2024-12-05 14:02:02.076090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.076116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.076234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.076265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.076400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.076449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.076543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.076570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.076649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.076673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 
00:30:30.942 [2024-12-05 14:02:02.076760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.076786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.076876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.076902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.076993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.077129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.077246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 
00:30:30.942 [2024-12-05 14:02:02.077381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.077544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.077664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.077781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.077898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.077923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 
00:30:30.942 [2024-12-05 14:02:02.078004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.078107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.078241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.078361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.078500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 
00:30:30.942 [2024-12-05 14:02:02.078636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.078755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.078903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.078929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.079016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.079136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 
00:30:30.942 [2024-12-05 14:02:02.079247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.079350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.079511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.079637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.079758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 
00:30:30.942 [2024-12-05 14:02:02.079875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.079903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.080000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.080027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.080135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.080161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.080274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.080305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 00:30:30.942 [2024-12-05 14:02:02.080435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.942 [2024-12-05 14:02:02.080462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.942 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.080578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.080604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.080691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.080717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.080836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.080861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.080973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.081088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.081205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.081320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.081448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.081562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.081701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.081833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.081959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.081997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.082122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.082150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.082277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.082304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.082443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.082471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.082593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.082620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.082703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.082731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.082825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.082852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.082933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.082960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.083049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.083167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.083291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.083424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.083535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.083648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.083765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.083912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.083937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.084021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.084047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.084198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.084226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.084343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.084372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.084489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.084532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.084626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.084655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.084840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.084867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.085010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.085037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 00:30:30.943 [2024-12-05 14:02:02.085160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.943 [2024-12-05 14:02:02.085186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.943 qpair failed and we were unable to recover it. 
00:30:30.943 [2024-12-05 14:02:02.085275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.085303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.085445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.085485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.085607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.085634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.085754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.085782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.085885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.085912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 
00:30:30.944 [2024-12-05 14:02:02.085998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.086133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.086253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.086369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.086513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 
00:30:30.944 [2024-12-05 14:02:02.086657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.086772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.086885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.086910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.086991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.087109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 
00:30:30.944 [2024-12-05 14:02:02.087241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.087380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.087515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.087661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.087807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 
00:30:30.944 [2024-12-05 14:02:02.087946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.087972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.088071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.088099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.088189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.088216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.088309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.088336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.088453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.088480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 
00:30:30.944 [2024-12-05 14:02:02.088599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.088626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.088709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.088734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.944 qpair failed and we were unable to recover it. 00:30:30.944 [2024-12-05 14:02:02.088828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.944 [2024-12-05 14:02:02.088854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.088940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.088967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.089058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.089085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 
00:30:30.945 [2024-12-05 14:02:02.089163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.089192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.089327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.089353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.089457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.089483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.089601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.089625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.089733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 
00:30:30.945 [2024-12-05 14:02:02.089808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.089833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.089975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.090091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.090235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.090346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 
00:30:30.945 [2024-12-05 14:02:02.090471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.090618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.090731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.090850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.090877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 00:30:30.945 [2024-12-05 14:02:02.090973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.945 [2024-12-05 14:02:02.091000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.945 qpair failed and we were unable to recover it. 
00:30:30.947 [2024-12-05 14:02:02.099832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1323f30 (9): Bad file descriptor
00:30:30.948 [2024-12-05 14:02:02.105503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.105530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.948 [2024-12-05 14:02:02.105616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.105642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.948 [2024-12-05 14:02:02.105727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.105752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.948 [2024-12-05 14:02:02.105846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.105872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.948 [2024-12-05 14:02:02.105982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.106007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 
00:30:30.948 [2024-12-05 14:02:02.106081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.106106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.948 [2024-12-05 14:02:02.106218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.106243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.948 [2024-12-05 14:02:02.106357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.106383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.948 [2024-12-05 14:02:02.106470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.948 [2024-12-05 14:02:02.106497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.948 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.106622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.106648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 
00:30:30.949 [2024-12-05 14:02:02.106760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.106786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.106901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.106927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.107033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.107170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.107284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 
00:30:30.949 [2024-12-05 14:02:02.107397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.107547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.107658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.107772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.107876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.107902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 
00:30:30.949 [2024-12-05 14:02:02.108003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.108127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.108266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.108384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.108514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 
00:30:30.949 [2024-12-05 14:02:02.108656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.108785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.108899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.108925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.109043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.109175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 
00:30:30.949 [2024-12-05 14:02:02.109287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.109399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.109557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.109670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.949 [2024-12-05 14:02:02.109773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 
00:30:30.949 [2024-12-05 14:02:02.109878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.949 [2024-12-05 14:02:02.109912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.949 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.110022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.110047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.110160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.110186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.110319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.110358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.110583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.110625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 
00:30:30.950 [2024-12-05 14:02:02.110758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.110785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.110902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.110929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.111011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.111151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.111254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 
00:30:30.950 [2024-12-05 14:02:02.111396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.111519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.111640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.111784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.111900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.111926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 
00:30:30.950 [2024-12-05 14:02:02.112017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.112046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.112155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.112181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.112268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.112296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.112383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.112413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.112526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.112565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 
00:30:30.950 [2024-12-05 14:02:02.112686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.112714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.112862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.112888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.113005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.113144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.113308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 
00:30:30.950 [2024-12-05 14:02:02.113435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.113570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.113680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.113816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.113958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.113985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 
00:30:30.950 [2024-12-05 14:02:02.114110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.114136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.114212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.114238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.114347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.114373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.114471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.950 [2024-12-05 14:02:02.114496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.950 qpair failed and we were unable to recover it. 00:30:30.950 [2024-12-05 14:02:02.114587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.114612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 
00:30:30.951 [2024-12-05 14:02:02.114695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.114721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.114811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.114837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.114952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.114977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.115054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.115080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.115171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.115198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 
00:30:30.951 [2024-12-05 14:02:02.115308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.115334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.115465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.115504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.115604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.115632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.115748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.115775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 00:30:30.951 [2024-12-05 14:02:02.115888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.951 [2024-12-05 14:02:02.115914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.951 qpair failed and we were unable to recover it. 
00:30:30.954 [2024-12-05 14:02:02.130579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.130605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.130725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.130751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.130838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.130865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.130981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.131007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.131150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.131188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 
00:30:30.954 [2024-12-05 14:02:02.131283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.131321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.131448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.131486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.131585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.131612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.131699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.131724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.131832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.131857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 
00:30:30.954 [2024-12-05 14:02:02.131980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.132008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.132113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.132138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.132219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.132243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.132355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.954 [2024-12-05 14:02:02.132381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.954 qpair failed and we were unable to recover it. 00:30:30.954 [2024-12-05 14:02:02.132481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.132510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 
00:30:30.955 [2024-12-05 14:02:02.132609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.132636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.132727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.132754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.132890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.132917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.133022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.133150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 
00:30:30.955 [2024-12-05 14:02:02.133275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.133399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.133522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.133636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.133742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 
00:30:30.955 [2024-12-05 14:02:02.133874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.133899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.134006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.134032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.134123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.134148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.134269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.134298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.134388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.134415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 
00:30:30.955 [2024-12-05 14:02:02.134570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.134603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.134710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.134735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.134845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.134871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.134981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.135093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 
00:30:30.955 [2024-12-05 14:02:02.135197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.135361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.135493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.135596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.135748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 
00:30:30.955 [2024-12-05 14:02:02.135852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.955 qpair failed and we were unable to recover it. 00:30:30.955 [2024-12-05 14:02:02.135972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.955 [2024-12-05 14:02:02.135999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.136088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.136212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.136326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 
00:30:30.956 [2024-12-05 14:02:02.136500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.136617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.136739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.136848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.136965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.136997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 
00:30:30.956 [2024-12-05 14:02:02.137114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.137141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.137253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.137280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.137389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.137426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.137520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.137545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.137659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.137685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 
00:30:30.956 [2024-12-05 14:02:02.137783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.137808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.137918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.137943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.138033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.138141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.138253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 
00:30:30.956 [2024-12-05 14:02:02.138366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.138503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.138631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.138750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.138916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.138942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 
00:30:30.956 [2024-12-05 14:02:02.139030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.139136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.139271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.139385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.139504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 
00:30:30.956 [2024-12-05 14:02:02.139645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.139788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.139904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.139930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.140061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.140087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.140175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.140200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 
00:30:30.956 [2024-12-05 14:02:02.140279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.956 [2024-12-05 14:02:02.140304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.956 qpair failed and we were unable to recover it. 00:30:30.956 [2024-12-05 14:02:02.140410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.140445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 00:30:30.957 [2024-12-05 14:02:02.140535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.140560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 00:30:30.957 [2024-12-05 14:02:02.140654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.140679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 00:30:30.957 [2024-12-05 14:02:02.140761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.140787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 
00:30:30.957 [2024-12-05 14:02:02.140874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.140899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 00:30:30.957 [2024-12-05 14:02:02.141013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.141039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 00:30:30.957 [2024-12-05 14:02:02.141127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.141152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 00:30:30.957 [2024-12-05 14:02:02.141234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.141261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 00:30:30.957 [2024-12-05 14:02:02.141345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.957 [2024-12-05 14:02:02.141373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.957 qpair failed and we were unable to recover it. 
00:30:30.957 [2024-12-05 14:02:02.141470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.141498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.141573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.141604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.141720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.141747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.141847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.141874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.141968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.141993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.142115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.142141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.142237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.142276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.142428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.142466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.142588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.142622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.142743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.142769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.142878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.142904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.142983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.143148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.143304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.143464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.143619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.143728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.143845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.143963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.143991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.144103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.144130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.144224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.144250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.144365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.144392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.144491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.144518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.144602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.144628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.144710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.957 [2024-12-05 14:02:02.144735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.957 qpair failed and we were unable to recover it.
00:30:30.957 [2024-12-05 14:02:02.144827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.144853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.144963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.144990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.145102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.145130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.145257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.145285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.145405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.145438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.145553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.145579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.145657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.145682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.145796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.145822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.145902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.145929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.146961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.146993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.147113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.147230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.147347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.147487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.147631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.147772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.147890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.147976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.148003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.148149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.148176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.148300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.148330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.148472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.148499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.148612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.148639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.148752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.148778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.148893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.148918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.149026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.149053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.149140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.149167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.149258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.149285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.958 [2024-12-05 14:02:02.149399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.958 [2024-12-05 14:02:02.149431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.958 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.149521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.149547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.149693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.149719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.149808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.149833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.149912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.149938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.150023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.150050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.150173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.150213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.150357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.150386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.150523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.150552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.150646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.150673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.150787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.150812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.150928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.150954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.151941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.151967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.152045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.152071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.152172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.152212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.152357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.152390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.152544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.152572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.152664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.152690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.152798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.152824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.152911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.152937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.153043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.153225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.153373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.153522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.153640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.153749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.959 [2024-12-05 14:02:02.153871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.959 qpair failed and we were unable to recover it.
00:30:30.959 [2024-12-05 14:02:02.153957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.960 [2024-12-05 14:02:02.153982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.960 qpair failed and we were unable to recover it.
00:30:30.960 [2024-12-05 14:02:02.154055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.960 [2024-12-05 14:02:02.154080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.960 qpair failed and we were unable to recover it.
00:30:30.960 [2024-12-05 14:02:02.154203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.154231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.154322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.154349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.154451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.154481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.154576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.154604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.154717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.154743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 
00:30:30.960 [2024-12-05 14:02:02.154827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.154854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.154946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.154974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.155115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.155140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.155224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.155251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.155359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.155386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 
00:30:30.960 [2024-12-05 14:02:02.155511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.155539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.155619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.155645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.155749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.155775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.155886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.155916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.156029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 
00:30:30.960 [2024-12-05 14:02:02.156168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.156337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.156451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.156577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.156702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 
00:30:30.960 [2024-12-05 14:02:02.156805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.156916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.156942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.157054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.157080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.157166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.157194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.157282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.157307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 
00:30:30.960 [2024-12-05 14:02:02.157401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.157438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.960 [2024-12-05 14:02:02.157528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.960 [2024-12-05 14:02:02.157554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.960 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.157648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.157674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.157756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.157781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.157888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.157914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 
00:30:30.961 [2024-12-05 14:02:02.158010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.158123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.158260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.158396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.158529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 
00:30:30.961 [2024-12-05 14:02:02.158672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.158785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.158893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.158919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.158999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.159025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.159127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.159167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 
00:30:30.961 [2024-12-05 14:02:02.159308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.159335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.159441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.159470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.159587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.159614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.159699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.159724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.159861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.159886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 
00:30:30.961 [2024-12-05 14:02:02.159998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.160150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.160296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.160401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.160548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 
00:30:30.961 [2024-12-05 14:02:02.160654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.160765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.160882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.160908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.160997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.161120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 
00:30:30.961 [2024-12-05 14:02:02.161241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.161360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.161493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.161636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.161772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 
00:30:30.961 [2024-12-05 14:02:02.161894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.161921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.961 qpair failed and we were unable to recover it. 00:30:30.961 [2024-12-05 14:02:02.162010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.961 [2024-12-05 14:02:02.162036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.162183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.162209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.162289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.162315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.162395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.162429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 
00:30:30.962 [2024-12-05 14:02:02.162543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.162569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.162676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.162702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.162854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.162880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.162996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.163022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.163140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.163168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 
00:30:30.962 [2024-12-05 14:02:02.163301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.163330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.163458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.163498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.163600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.163626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.163711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.163738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.163854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.163881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 
00:30:30.962 [2024-12-05 14:02:02.163990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.164131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.164259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.164371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.164495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 
00:30:30.962 [2024-12-05 14:02:02.164635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.164776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.164921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.164946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.165038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.165066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 00:30:30.962 [2024-12-05 14:02:02.165182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.962 [2024-12-05 14:02:02.165208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.962 qpair failed and we were unable to recover it. 
00:30:30.962 [2024-12-05 14:02:02.165299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.165332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.165449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.165476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.165564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.165590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.165699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.165725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.165809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.165835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.165914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.165939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.166057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.166083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.166175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.166200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.166313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.166342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.962 [2024-12-05 14:02:02.166444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.962 [2024-12-05 14:02:02.166472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.962 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.166557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.166585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.166707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.166733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.166822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.166848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.166984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.167147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.167287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.167406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.167548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.167690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.167796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.167944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.167972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.168961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.168986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.169105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.169145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.169262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.169290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.169407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.169445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.169580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.169606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.169687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.169713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.169808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.169835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.169952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.169983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.170904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.963 qpair failed and we were unable to recover it.
00:30:30.963 [2024-12-05 14:02:02.170978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.963 [2024-12-05 14:02:02.171003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.171964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.171990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.172916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.172942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.173896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.173985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.174102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.174248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.174358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.174546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.174688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.174798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.174952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.174979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.175097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.964 [2024-12-05 14:02:02.175123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.964 qpair failed and we were unable to recover it.
00:30:30.964 [2024-12-05 14:02:02.175201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.175227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.175340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.175367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.175460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.175487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.175575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.175602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.175695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.175722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.175799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.175824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.175917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.175942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.176949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.176976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.177941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.177968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.178061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.178089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.178206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.178240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.178322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.178348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.178435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.178462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.178552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.178579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.965 [2024-12-05 14:02:02.178665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.965 [2024-12-05 14:02:02.178693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.965 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.178779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.178804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.178917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.178942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.179056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.179083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.179159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.179186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.179293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.179332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.179456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.179485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.179577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.179605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.179735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.966 [2024-12-05 14:02:02.179762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.966 qpair failed and we were unable to recover it.
00:30:30.966 [2024-12-05 14:02:02.179852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.179878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.179993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.180101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.180255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.180394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 
00:30:30.966 [2024-12-05 14:02:02.180574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.180722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.180860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.180964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.180990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.181089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.181116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 
00:30:30.966 [2024-12-05 14:02:02.181252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.181281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.181374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.181403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.181506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.181533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.181646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.181672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.181800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.181827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 
00:30:30.966 [2024-12-05 14:02:02.181942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.181969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.182052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.182185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.182315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.182435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 
00:30:30.966 [2024-12-05 14:02:02.182554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.182673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.182788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.182931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.966 [2024-12-05 14:02:02.182958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.966 qpair failed and we were unable to recover it. 00:30:30.966 [2024-12-05 14:02:02.183101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.183129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 
00:30:30.967 [2024-12-05 14:02:02.183244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.183272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.183412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.183454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.183570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.183606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.183711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.183738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.183869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.183896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 
00:30:30.967 [2024-12-05 14:02:02.183997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.184120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.184236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.184347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.184502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 
00:30:30.967 [2024-12-05 14:02:02.184638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.184770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.184897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.184924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.185008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.185037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.185211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.185256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 
00:30:30.967 [2024-12-05 14:02:02.185343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.185372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.185469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.185496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.185582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.185607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.185691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.185718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.185841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.185867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 
00:30:30.967 [2024-12-05 14:02:02.185978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.186004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.186083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.186109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.186262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.186301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.186423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.186449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.186565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.186593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 
00:30:30.967 [2024-12-05 14:02:02.186679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.186706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.186831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.186874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.186987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.187014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.187158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.187185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.187285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.187330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 
00:30:30.967 [2024-12-05 14:02:02.187466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.187494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.187585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.187611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.187695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.187720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.187835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.967 [2024-12-05 14:02:02.187861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.967 qpair failed and we were unable to recover it. 00:30:30.967 [2024-12-05 14:02:02.187968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.187994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 
00:30:30.968 [2024-12-05 14:02:02.188076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.188103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.188187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.188216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.188339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.188367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.188483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.188510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.188632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.188658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 
00:30:30.968 [2024-12-05 14:02:02.188776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.188802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.188914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.188940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.189021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.189047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.189156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.189196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.189284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.189312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 
00:30:30.968 [2024-12-05 14:02:02.189432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.189460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.189554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.189582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.189711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.189739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.189853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.189881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.189997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.190024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 
00:30:30.968 [2024-12-05 14:02:02.190192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.190222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.190357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.190384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.190506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.190535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.190617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.190644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.190750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.190779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 
00:30:30.968 [2024-12-05 14:02:02.190952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.190995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.191121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.191151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.191285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.191312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.191430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.191470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.191610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.191639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 
00:30:30.968 [2024-12-05 14:02:02.191737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.191764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.191845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.191872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.192019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.192049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.192193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.192223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.192367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.192395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 
00:30:30.968 [2024-12-05 14:02:02.192498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.192525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.192606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.192632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.192788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.192815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.192917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.192946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.968 qpair failed and we were unable to recover it. 00:30:30.968 [2024-12-05 14:02:02.193065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.968 [2024-12-05 14:02:02.193094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.969 [2024-12-05 14:02:02.193192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.193234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.193376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.193402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.193517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.193542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.193622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.193649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.193749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.193779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.969 [2024-12-05 14:02:02.193900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.193929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.194052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.194081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.194235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.194283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.194399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.194435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.194525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.194551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.969 [2024-12-05 14:02:02.194704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.194733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.194876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.194904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.195039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.195066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.195181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.195208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.195315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.195340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.969 [2024-12-05 14:02:02.195435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.195463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.195546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.195572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.195672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.195699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.195786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.195812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.195971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.196002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.969 [2024-12-05 14:02:02.196118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.196160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.196322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.196347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.196444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.196470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.196545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.196569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.196685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.196711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.969 [2024-12-05 14:02:02.196885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.196925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.197045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.197077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.197199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.197228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.197324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.197352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.197451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.197477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.969 [2024-12-05 14:02:02.197618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.197643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.197744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.197771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.197863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.197891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.197997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.198021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 00:30:30.969 [2024-12-05 14:02:02.198172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.969 [2024-12-05 14:02:02.198202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.969 qpair failed and we were unable to recover it. 
00:30:30.970 [2024-12-05 14:02:02.198328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.198353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.198436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.198465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.198558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.198586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.198714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.198744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.198836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.198866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 
00:30:30.970 [2024-12-05 14:02:02.199001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.199031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.199132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.199163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.199348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.199376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.199518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.199631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.199657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 
00:30:30.970 [2024-12-05 14:02:02.199769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.199795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.199920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.199964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.200059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.200087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.200216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.200242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.200352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.200378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 
00:30:30.970 [2024-12-05 14:02:02.200485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.200524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.200620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.200648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.200735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.200762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.200874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.200907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.200994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 
00:30:30.970 [2024-12-05 14:02:02.201142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.201291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.201394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.201506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.201650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 
00:30:30.970 [2024-12-05 14:02:02.201787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.201899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.201925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.202002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.202027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.970 qpair failed and we were unable to recover it. 00:30:30.970 [2024-12-05 14:02:02.202124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.970 [2024-12-05 14:02:02.202149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.202288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.202314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 
00:30:30.971 [2024-12-05 14:02:02.202436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.202462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.202582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.202609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.202699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.202725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.202815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.202841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.202955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.202982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 
00:30:30.971 [2024-12-05 14:02:02.203067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.203094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.203185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.203224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.203338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.203367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.203456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.203484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 00:30:30.971 [2024-12-05 14:02:02.203579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.971 [2024-12-05 14:02:02.203607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.971 qpair failed and we were unable to recover it. 
00:30:30.971 [2024-12-05 14:02:02.203718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.971 [2024-12-05 14:02:02.203745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.971 qpair failed and we were unable to recover it.
[the same connect()/qpair-recovery error triple repeats continuously from 14:02:02.203870 through 14:02:02.220368, cycling through tqpair handles 0x1315fa0, 0x7f9d24000b90, 0x7f9d28000b90, and 0x7f9d30000b90; every occurrence reports errno = 111 (ECONNREFUSED) against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:30:30.974 [2024-12-05 14:02:02.220462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.220490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 00:30:30.974 [2024-12-05 14:02:02.220607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.220633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 00:30:30.974 [2024-12-05 14:02:02.220715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.220743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 00:30:30.974 [2024-12-05 14:02:02.220836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.220863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 00:30:30.974 [2024-12-05 14:02:02.220944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.220971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 
00:30:30.974 [2024-12-05 14:02:02.221091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.221120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 00:30:30.974 [2024-12-05 14:02:02.221237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.221265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 00:30:30.974 [2024-12-05 14:02:02.221429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.974 [2024-12-05 14:02:02.221469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.974 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.221561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.221588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.221701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.221727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 
00:30:30.975 [2024-12-05 14:02:02.221866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.221892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.222037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.222160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.222272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.222382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 
00:30:30.975 [2024-12-05 14:02:02.222507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.222624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.222759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.222869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.222896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.223007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.223034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 
00:30:30.975 [2024-12-05 14:02:02.223121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.223148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.223263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.223290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.223410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.223445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.223575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.223613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.223719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.223758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 
00:30:30.975 [2024-12-05 14:02:02.223879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.223908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.224023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.224135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.224257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.224413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 
00:30:30.975 [2024-12-05 14:02:02.224558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.224673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.224805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.224960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.224985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.225070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.225098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 
00:30:30.975 [2024-12-05 14:02:02.225182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.975 [2024-12-05 14:02:02.225208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.975 qpair failed and we were unable to recover it. 00:30:30.975 [2024-12-05 14:02:02.225337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.225376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.225508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.225542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.225661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.225686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.225773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.225799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 
00:30:30.976 [2024-12-05 14:02:02.225900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.225934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.226058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.226224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.226360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.226490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 
00:30:30.976 [2024-12-05 14:02:02.226600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.226748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.226860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.226971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.226997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.227079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.227106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 
00:30:30.976 [2024-12-05 14:02:02.227236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.227275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.227385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.227413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.227511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.227539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.227644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.227670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.227778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.227810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 
00:30:30.976 [2024-12-05 14:02:02.227925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.227950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.228131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.228158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.228273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.228300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.228408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.228456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.228547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.228575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 
00:30:30.976 [2024-12-05 14:02:02.228716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.228764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.228932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.228982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.229091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.229123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.229229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.229260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.229388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.229428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 
00:30:30.976 [2024-12-05 14:02:02.229521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.229547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.229629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.229655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.229793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.229836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.229999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.230044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 00:30:30.976 [2024-12-05 14:02:02.230152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.976 [2024-12-05 14:02:02.230178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.976 qpair failed and we were unable to recover it. 
00:30:30.977 [2024-12-05 14:02:02.230265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.230290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.230412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.230452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.230532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.230558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.230643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.230669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.230801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.230847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 
00:30:30.977 [2024-12-05 14:02:02.231009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.231060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.231150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.231176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.231269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.231296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.231424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.231451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.231541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.231567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 
00:30:30.977 [2024-12-05 14:02:02.231697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.231730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.231856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.231890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.231994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.232131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.232241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 
00:30:30.977 [2024-12-05 14:02:02.232408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.232520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.232634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.232781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.232884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.232909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 
00:30:30.977 [2024-12-05 14:02:02.233037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.233076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.233204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.233231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.233358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.233397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.233503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.233529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.233650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.233676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 
00:30:30.977 [2024-12-05 14:02:02.233783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.233808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.233948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.233974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.234063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.234090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.234233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.234258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.234348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.234374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 
00:30:30.977 [2024-12-05 14:02:02.234458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.234485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.234597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.234622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.234770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.234801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.234889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.234917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.977 qpair failed and we were unable to recover it. 00:30:30.977 [2024-12-05 14:02:02.235016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.977 [2024-12-05 14:02:02.235061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 
00:30:30.978 [2024-12-05 14:02:02.235184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.235211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.235317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.235342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.235424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.235449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.235563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.235590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.235676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.235701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 
00:30:30.978 [2024-12-05 14:02:02.235781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.235806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.235921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.235949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.236027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.236052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.236138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.236165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.236250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.236276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 
00:30:30.978 [2024-12-05 14:02:02.236378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.236425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.236518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.236546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.236680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.236713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.236898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.236945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.237078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.237111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 
00:30:30.978 [2024-12-05 14:02:02.237266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.237293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.237408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.237441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.237525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.237552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.237635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.237661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.237827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.237859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 
00:30:30.978 [2024-12-05 14:02:02.238009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.238057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.238178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.238206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.238296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.238324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.238444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.238471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.238555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.238582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 
00:30:30.978 [2024-12-05 14:02:02.238743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.238790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.238905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.238939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.239044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.239077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.239198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.239224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.239401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.239434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 
00:30:30.978 [2024-12-05 14:02:02.239575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.239600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.239735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.239760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.239870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.978 [2024-12-05 14:02:02.239895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.978 qpair failed and we were unable to recover it. 00:30:30.978 [2024-12-05 14:02:02.239981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.240007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.240167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.240211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 
00:30:30.979 [2024-12-05 14:02:02.240329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.240356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.240471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.240497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.240606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.240631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.240718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.240762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.240860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.240898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 
00:30:30.979 [2024-12-05 14:02:02.241021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.241047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.241194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.241225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.241407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.241525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.241668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.241710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.241871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.241903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 
00:30:30.979 [2024-12-05 14:02:02.242008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.242040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.242201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.242233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.242348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.242374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.242509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.242535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.242654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.242680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 
00:30:30.979 [2024-12-05 14:02:02.242767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.242793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.242948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.242982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.243137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.243170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.243353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.243392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.243522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.243550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 
00:30:30.979 [2024-12-05 14:02:02.243668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.243694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.243810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.243836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.243955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.243981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.244063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.244090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.244226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.244252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 
00:30:30.979 [2024-12-05 14:02:02.244367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.244393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.244511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.244551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.244648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.244676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.244796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.244821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.244908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.244935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 
00:30:30.979 [2024-12-05 14:02:02.245015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.245040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.245170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.245209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.979 [2024-12-05 14:02:02.245329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.979 [2024-12-05 14:02:02.245356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.979 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.245483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.245513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.245604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.245631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 
00:30:30.980 [2024-12-05 14:02:02.245719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.245745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.245828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.245856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.245942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.245969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.246085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.246112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.246204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.246232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 
00:30:30.980 [2024-12-05 14:02:02.246323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.246350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.246490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.246529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.246652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.246680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.246829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.246875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.247015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.247053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 
00:30:30.980 [2024-12-05 14:02:02.247161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.247186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.247324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.247350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.247464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.247490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.247577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.247603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.247712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.247744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 
00:30:30.980 [2024-12-05 14:02:02.247899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.247933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.248079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.248112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.248266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.248305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.248428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.248457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.248581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.248609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 
00:30:30.980 [2024-12-05 14:02:02.248726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.248752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.248855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.248903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.248997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.249023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.249164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.249190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.249274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.249300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 
00:30:30.980 [2024-12-05 14:02:02.249377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.249403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.249501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.980 [2024-12-05 14:02:02.249528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.980 qpair failed and we were unable to recover it. 00:30:30.980 [2024-12-05 14:02:02.249609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.249636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.249731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.249758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.249876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.249903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 
00:30:30.981 [2024-12-05 14:02:02.250002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.250107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.250214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.250333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.250451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 
00:30:30.981 [2024-12-05 14:02:02.250613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.250756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.250875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.250901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.251041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.251154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 
00:30:30.981 [2024-12-05 14:02:02.251276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.251413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.251531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.251671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.251795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 
00:30:30.981 [2024-12-05 14:02:02.251942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.251968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.252062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.252089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.252194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.252233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.252341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.252381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.252526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.252555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 
00:30:30.981 [2024-12-05 14:02:02.252680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.252707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.252786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.252812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.252905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.252933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.253050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 
00:30:30.981 [2024-12-05 14:02:02.253313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.253428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.253540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.253655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.253801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 
00:30:30.981 [2024-12-05 14:02:02.253917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.253943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.981 [2024-12-05 14:02:02.254029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.981 [2024-12-05 14:02:02.254055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.981 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.254170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.254197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.254334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.254375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.254478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.254518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 
00:30:30.982 [2024-12-05 14:02:02.254617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.254656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.254747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.254775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.254891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.254917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.255000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.255028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.255145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.255171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 
00:30:30.982 [2024-12-05 14:02:02.255256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.255286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.255379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.255409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.255564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.255591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.255685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.255732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.255864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.255890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 
00:30:30.982 [2024-12-05 14:02:02.256072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.256105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.256252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.256284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.256434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.256464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.256558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.256586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.256701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.256728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 
00:30:30.982 [2024-12-05 14:02:02.256806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.256853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.256978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.257021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.257130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.257167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.257284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.257319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.257462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.257490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 
00:30:30.982 [2024-12-05 14:02:02.257573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.257599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.257736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.257766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.257901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.257936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.258041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.258074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.258223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.258256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 
00:30:30.982 [2024-12-05 14:02:02.258393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.258440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.258575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.258614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.258753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.258792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.258891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.258918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.259053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.259100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 
00:30:30.982 [2024-12-05 14:02:02.259207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.259233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.259349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.259381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.982 [2024-12-05 14:02:02.259520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.982 [2024-12-05 14:02:02.259551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:30.982 qpair failed and we were unable to recover it. 00:30:30.983 [2024-12-05 14:02:02.259678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.983 [2024-12-05 14:02:02.259707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.983 qpair failed and we were unable to recover it. 00:30:30.983 [2024-12-05 14:02:02.259793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.983 [2024-12-05 14:02:02.259820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.983 qpair failed and we were unable to recover it. 
00:30:30.983 [2024-12-05 14:02:02.259987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.260030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.260216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.260252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.260413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.260458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.260581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.260614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.260728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.260754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.260837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.260863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.260962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.260996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.261096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.261123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.261205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.261231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.261326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.261354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.261489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.261518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.261608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.261636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.261801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.261828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.262081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.262115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.262222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.262256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.262403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.262437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.262547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.262573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.262691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.262717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.262858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.262901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.262996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.263161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.263298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.263427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.263541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.263648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.263756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.263903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.263931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.983 qpair failed and we were unable to recover it.
00:30:30.983 [2024-12-05 14:02:02.264936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.983 [2024-12-05 14:02:02.264972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.265070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.265114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.265239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.265265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.265344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.265371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.265464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.265493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.265629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.265656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.265802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.265836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.265939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.265975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.266120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.266154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.266310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.266389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.266538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.266565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.266646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.266672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.266780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.266815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.267019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.267056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.267181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.267215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.267356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.267382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.267506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.267533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.267613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.267640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.267763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.267797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.268003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.268037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.268183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.268225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.268347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.268373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.268501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.268528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.268653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.268679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.268793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.268842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.272538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.272578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.272688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.272715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.272835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.272862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.272980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.273014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.273152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.273186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.273388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.273450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.273540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.273566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.273661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.273688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.273779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.273804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.273914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.273940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.274033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.984 [2024-12-05 14:02:02.274059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.984 qpair failed and we were unable to recover it.
00:30:30.984 [2024-12-05 14:02:02.274190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.274228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.274326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.274354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.274497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.274534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.274659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.274694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.274814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.274850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.275023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.275059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.275232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.275267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.275381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.275427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.275579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.275614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.275763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.275798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.275918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.275953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.276057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.276092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.276263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.276298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.276412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.276464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.276584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.276620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.276727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.276762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.276934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.276969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.277117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.277154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.277307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.277346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.277459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.277496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.277606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.277642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.277796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.277832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.277982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.278018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.278164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.278199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.278342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.985 [2024-12-05 14:02:02.278440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:30.985 qpair failed and we were unable to recover it.
00:30:30.985 [2024-12-05 14:02:02.278593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.278629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.278746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.278782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.278932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.278969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.279139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.279175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.279363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.279445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 
00:30:30.985 [2024-12-05 14:02:02.279615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.279650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.279779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.279816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.279936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.279972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.280121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.280159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 00:30:30.985 [2024-12-05 14:02:02.280313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.985 [2024-12-05 14:02:02.280351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.985 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.280493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.280531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.280646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.280683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.280806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.280844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.280998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.281035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.281173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.281210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.281373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.281410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.281553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.281590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.281712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.281749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.281922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.281959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.282078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.282115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.282244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.282283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.282409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.282456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.282576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.282613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.282744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.282781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.282931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.282971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.283093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.283131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.283290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.283327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.283437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.283476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.283600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.283682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.283848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.283926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.284103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.284168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.284303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.284373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.284606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.284671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.284890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.284954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.285165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.285232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.285373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.285413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.285560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.285597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.285721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.285757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.285935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.285972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.286145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.286182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.286299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.286335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.286487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.286542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.286721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.286762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.286927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.286965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.287161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.287199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 
00:30:30.986 [2024-12-05 14:02:02.287348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.287453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.287583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.287622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.287783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.287822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.986 [2024-12-05 14:02:02.287972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.986 [2024-12-05 14:02:02.288009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.986 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.288155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.288191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.288370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.288445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.288570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.288607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.288762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.288799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.288946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.288985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.289101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.289139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.289279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.289316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.289462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.289519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.289649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.289692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.289853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.289893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.290077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.290115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.290240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.290278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.290424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.290464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.290621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.290661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.290820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.290859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.290997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.291038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.291163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.291205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.291344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.291383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.291532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.291570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.291695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.291747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.291933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.291971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.292129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.292166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.292322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.292360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.292542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.292581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.292712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.292749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.292891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.292928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.293043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.293080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.293225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.293261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.293428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.293482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.293610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.293646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.293799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.293835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.293984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.294019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.294146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.294182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.294313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.294348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.294498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.294535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.294687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.294722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 
00:30:30.987 [2024-12-05 14:02:02.294862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.294898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.295056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.987 [2024-12-05 14:02:02.295091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.987 qpair failed and we were unable to recover it. 00:30:30.987 [2024-12-05 14:02:02.295248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.988 [2024-12-05 14:02:02.295284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.988 qpair failed and we were unable to recover it. 00:30:30.988 [2024-12-05 14:02:02.295446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.988 [2024-12-05 14:02:02.295483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.988 qpair failed and we were unable to recover it. 00:30:30.988 [2024-12-05 14:02:02.295626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.988 [2024-12-05 14:02:02.295662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.988 qpair failed and we were unable to recover it. 
00:30:30.991 [2024-12-05 14:02:02.312281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.312309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.312447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.312476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.312570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.312599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.312702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.312736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.312882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.312910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 
00:30:30.991 [2024-12-05 14:02:02.313039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.313073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.313174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.313201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.313335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.313363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.313469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.313514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 00:30:30.991 [2024-12-05 14:02:02.313605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.313631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:30.991 qpair failed and we were unable to recover it. 
00:30:30.991 [2024-12-05 14:02:02.313749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.991 [2024-12-05 14:02:02.313775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.674844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.674913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.675069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.675106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.675276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.675312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.675470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.675496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 
00:30:31.255 [2024-12-05 14:02:02.675615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.675640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.675799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.675836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.676000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.676035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.676166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.676203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 00:30:31.255 [2024-12-05 14:02:02.676358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.255 [2024-12-05 14:02:02.676393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.255 qpair failed and we were unable to recover it. 
00:30:31.255 [2024-12-05 14:02:02.676533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.676563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.676684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.676728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.676874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.676909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.677045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.677079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.677202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.677237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 
00:30:31.256 [2024-12-05 14:02:02.677376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.677411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.677552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.677576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.677691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.677738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.677887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.677924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.678073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.678109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 
00:30:31.256 [2024-12-05 14:02:02.678230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.678267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.678438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.678488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.678594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.678618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.678738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.678773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.678943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.678982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 
00:30:31.256 [2024-12-05 14:02:02.679223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.679314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.679466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.679492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.679610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.679637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.679725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.679750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.679873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.679899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 
00:30:31.256 [2024-12-05 14:02:02.680009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.680035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.680151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.680177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.680319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.680355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.680520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.680546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.680643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.680668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 
00:30:31.256 [2024-12-05 14:02:02.680776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.680802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.680893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.680918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.681089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.681137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.681290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.681337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.681496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.681522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 
00:30:31.256 [2024-12-05 14:02:02.681641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.681667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.681759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.681786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.681931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.681984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.682180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.682237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.682509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.682536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 
00:30:31.256 [2024-12-05 14:02:02.682635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.682660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.682807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.682861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.256 [2024-12-05 14:02:02.683032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.256 [2024-12-05 14:02:02.683086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.256 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.683358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.683435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.683565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.683590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 
00:30:31.257 [2024-12-05 14:02:02.683678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.683732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.683931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.683982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.684190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.684244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.684473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.684500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.684608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.684634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 
00:30:31.257 [2024-12-05 14:02:02.684780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.684831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.685031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.685057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.685142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.685167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.685256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.685283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.685499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.685551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 
00:30:31.257 [2024-12-05 14:02:02.685762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.685813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.686008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.686059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.686231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.686282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.686446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.686499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.686761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.686813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 
00:30:31.257 [2024-12-05 14:02:02.687019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.687069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.687240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.687294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.687491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.687545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.687747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.687799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 00:30:31.257 [2024-12-05 14:02:02.687977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.688027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 
00:30:31.257 [2024-12-05 14:02:02.688180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-12-05 14:02:02.688231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.257 qpair failed and we were unable to recover it. 
[The triplet above (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error, "qpair failed and we were unable to recover it.") repeats over 100 more times between 14:02:02.688 and 14:02:02.720. Every attempt targets addr=10.0.0.2, port=4420; the tqpair handle is 0x7f9d30000b90 throughout, except for a few attempts on tqpair=0x1315fa0 around 14:02:02.695-02.696.]
00:30:31.260 [2024-12-05 14:02:02.720587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.260 [2024-12-05 14:02:02.720653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.260 qpair failed and we were unable to recover it. 00:30:31.260 [2024-12-05 14:02:02.720914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.260 [2024-12-05 14:02:02.720977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.260 qpair failed and we were unable to recover it. 00:30:31.260 [2024-12-05 14:02:02.721272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.260 [2024-12-05 14:02:02.721337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.260 qpair failed and we were unable to recover it. 00:30:31.260 [2024-12-05 14:02:02.721664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.260 [2024-12-05 14:02:02.721729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.260 qpair failed and we were unable to recover it. 00:30:31.260 [2024-12-05 14:02:02.721934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.260 [2024-12-05 14:02:02.722014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.260 qpair failed and we were unable to recover it. 
00:30:31.260 [2024-12-05 14:02:02.722331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.722394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.722665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.722729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.722992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.723055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.723336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.723400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.723709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.723774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 
00:30:31.261 [2024-12-05 14:02:02.723972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.724052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.724261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.724328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.724644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.724710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.724928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.724991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.725246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.725309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 
00:30:31.261 [2024-12-05 14:02:02.725604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.725671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.725952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.726017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.726325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.726389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.726688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.726751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.726988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.727055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 
00:30:31.261 [2024-12-05 14:02:02.727296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.727363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.727711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.727776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.728033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.728097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.728376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.728489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.728722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.728782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 
00:30:31.261 [2024-12-05 14:02:02.729034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.729111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.729407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.729489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.729719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.729783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.729986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.730052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.730306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.730370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 
00:30:31.261 [2024-12-05 14:02:02.730669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.730733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.730978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.731045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.731272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.731336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.731644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.731712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.732004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.732067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 
00:30:31.261 [2024-12-05 14:02:02.732287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.732352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.732627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.732693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.732893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.732957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.733147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.733211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.261 [2024-12-05 14:02:02.733485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.733551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 
00:30:31.261 [2024-12-05 14:02:02.733737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.261 [2024-12-05 14:02:02.733803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.261 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.734061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.734124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.734433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.734497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.734713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.734776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.734977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.735044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
00:30:31.262 [2024-12-05 14:02:02.735292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.735355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.735634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.735701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.735905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.735969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.736160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.736223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.736471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.736536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
00:30:31.262 [2024-12-05 14:02:02.736784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.736847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.737134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.737197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.737462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.737527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.737757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.737820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.738106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.738169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
00:30:31.262 [2024-12-05 14:02:02.738379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.738457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.738745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.738810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.739060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.739125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.739370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.739465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.739758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.739822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
00:30:31.262 [2024-12-05 14:02:02.740030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.740093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.740312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.740380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.740685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.740748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.740953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.741017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.741208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.741272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
00:30:31.262 [2024-12-05 14:02:02.741524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.741602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.741894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.741958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.742199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.742264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.742470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.742535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.742782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.742846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
00:30:31.262 [2024-12-05 14:02:02.743085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.743149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.743399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.743478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.743727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.743791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.743983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.744047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 00:30:31.262 [2024-12-05 14:02:02.744336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.744400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
00:30:31.262 [2024-12-05 14:02:02.744622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.262 [2024-12-05 14:02:02.744686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.262 qpair failed and we were unable to recover it. 
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error for tqpair=0x7f9d30000b90, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", repeats continuously from 14:02:02.744931 through 14:02:02.781350; identical repeats elided ...]
00:30:31.543 [2024-12-05 14:02:02.781589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.543 [2024-12-05 14:02:02.781656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.543 qpair failed and we were unable to recover it. 00:30:31.543 [2024-12-05 14:02:02.781972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.782037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.782298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.782361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.782644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.782709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.782966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.783030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 
00:30:31.544 [2024-12-05 14:02:02.783270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.783334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.783633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.783699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.783954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.784021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.784231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.784294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.784556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.784621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 
00:30:31.544 [2024-12-05 14:02:02.784879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.784943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.785209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.785275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.785564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.785630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.785889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.785950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.786166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.786233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 
00:30:31.544 [2024-12-05 14:02:02.786541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.786606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.786909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.786973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.787230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.787294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.787543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.787608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.787917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.787981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 
00:30:31.544 [2024-12-05 14:02:02.788238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.788302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.788558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.788622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.788835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.788910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.789198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.789263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.789543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.789608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 
00:30:31.544 [2024-12-05 14:02:02.789840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.789905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.790163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.790229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.790507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.790572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.790824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.790888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.791184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.791249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 
00:30:31.544 [2024-12-05 14:02:02.791539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.791603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.791859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.791922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.792123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.792187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.792489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.792555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.792812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.792875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 
00:30:31.544 [2024-12-05 14:02:02.793121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.793187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.793491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.793558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.793748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.793811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.544 [2024-12-05 14:02:02.794098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.544 [2024-12-05 14:02:02.794162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.544 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.794449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.794514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.794805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.794868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.795177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.795241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.795460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.795527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.795765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.795834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.796103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.796169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.796363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.796447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.796699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.796763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.797011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.797075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.797320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.797383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.797639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.797705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.797957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.798025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.798265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.798329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.798623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.798688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.798983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.799047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.799282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.799345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.799612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.799677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.799877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.799942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.800140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.800204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.800471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.800537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.800838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.800903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.801161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.801227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.801539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.801603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.801919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.801994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.802214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.802281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.802583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.802648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.802900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.802967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.803224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.803289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.803536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.803601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.803888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.803952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.804240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.804304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.804569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.804636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.804841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.804907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.805123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.805189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.805434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.805501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.805761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.805826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 
00:30:31.545 [2024-12-05 14:02:02.806116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.806181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.545 [2024-12-05 14:02:02.806502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.545 [2024-12-05 14:02:02.806567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.545 qpair failed and we were unable to recover it. 00:30:31.546 [2024-12-05 14:02:02.806819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.546 [2024-12-05 14:02:02.806883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.546 qpair failed and we were unable to recover it. 00:30:31.546 [2024-12-05 14:02:02.807126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.546 [2024-12-05 14:02:02.807189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.546 qpair failed and we were unable to recover it. 00:30:31.546 [2024-12-05 14:02:02.807477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.546 [2024-12-05 14:02:02.807542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.546 qpair failed and we were unable to recover it. 
00:30:31.546 [2024-12-05 14:02:02.807842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.546 [2024-12-05 14:02:02.807907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.546 qpair failed and we were unable to recover it. 
00:30:31.546 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously through 14:02:02.830581, alternating between tqpair=0x7f9d30000b90 and tqpair=0x7f9d28000b90, all against addr=10.0.0.2, port=4420 ...]
00:30:31.549 [2024-12-05 14:02:02.830553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.830581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 
00:30:31.549 [2024-12-05 14:02:02.830726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.830753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.830856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.830884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.830975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.831002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.831126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.831154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.831265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.831293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 
00:30:31.549 [2024-12-05 14:02:02.831432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.831459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.831580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.831608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.831690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.831717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.831855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.831883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.832011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.832038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 
00:30:31.549 [2024-12-05 14:02:02.832156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.832184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.832292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.832319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.832436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.832464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.832560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.832587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.832664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.832691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 
00:30:31.549 [2024-12-05 14:02:02.832862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.832890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.832981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.833009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.833098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.833124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.833294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.833321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.833449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.833478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 
00:30:31.549 [2024-12-05 14:02:02.833580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.833608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.833731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.833758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.833896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.833924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.834053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.834080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.834167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.834194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 
00:30:31.549 [2024-12-05 14:02:02.834285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.834312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.834437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.834466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.834601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.834647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.834752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.834780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.834929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.834956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 
00:30:31.549 [2024-12-05 14:02:02.835042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.835069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.835170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.549 [2024-12-05 14:02:02.835198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.549 qpair failed and we were unable to recover it. 00:30:31.549 [2024-12-05 14:02:02.835321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.835352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.835476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.835504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.835585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.835613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 
00:30:31.550 [2024-12-05 14:02:02.835723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.835751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.835873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.835901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.836053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.836080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.836202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.836230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.836332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.836361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 
00:30:31.550 [2024-12-05 14:02:02.836529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.836557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.836669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.836704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.836852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.836879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.836972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.837143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 
00:30:31.550 [2024-12-05 14:02:02.837254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.837401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.837564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.837690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.837810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 
00:30:31.550 [2024-12-05 14:02:02.837929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.837956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.838038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.838065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.838161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.838204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.838335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.838365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.838501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.838532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 
00:30:31.550 [2024-12-05 14:02:02.838677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.838711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.838842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.838877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.839026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.839060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.839264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.839292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.839436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.839466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 
00:30:31.550 [2024-12-05 14:02:02.839584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.839614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.839770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.839804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.839918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.839953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.840100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.550 [2024-12-05 14:02:02.840152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.550 qpair failed and we were unable to recover it. 00:30:31.550 [2024-12-05 14:02:02.840305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.840334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 
00:30:31.551 [2024-12-05 14:02:02.840460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.840490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.840588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.840616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.840731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.840782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.840934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.840981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.841066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.841094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 
00:30:31.551 [2024-12-05 14:02:02.841215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.841242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.841360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.841387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.841508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.841559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.841656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.841683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.841801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.841829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 
00:30:31.551 [2024-12-05 14:02:02.841913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.841940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.842065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.842092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.842240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.842268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.842348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.842375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.842471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.842499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 
00:30:31.551 [2024-12-05 14:02:02.842622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.842650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.842791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.842837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.842942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.842970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.843117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.843145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 00:30:31.551 [2024-12-05 14:02:02.843224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.551 [2024-12-05 14:02:02.843251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.551 qpair failed and we were unable to recover it. 
00:30:31.551 [2024-12-05 14:02:02.843400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.843439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.843586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.843622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.843763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.843798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.843929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.843964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.844105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.844145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.844261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.844307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.844432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.844461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.844544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.844572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.844712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.844746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.844924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.844959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.845107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.845141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.845286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.845321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.845456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.845487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.845609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.845637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.845788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.845822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.846014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.846049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.551 [2024-12-05 14:02:02.846179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.551 [2024-12-05 14:02:02.846223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.551 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.846365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.846401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.846590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.846619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.846702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.846730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.846819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.846866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.847015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.847052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.847198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.847233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.847373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.847407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.847540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.847568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.847658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.847686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.847868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.847903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.848095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.848139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.848309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.848344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.848507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.848536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.848633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.848682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.848801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.848837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.848974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.849009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.849141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.849175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.849287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.849323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.849471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.849500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.849651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.849685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.849843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.849876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.850011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.850048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.850221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.850291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.850435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.850466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.850580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.850629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.850813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.850859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.850966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.851014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.851163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.851191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.851295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.851330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.851487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.851515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.851610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.851638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.851764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.851791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.851939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.552 [2024-12-05 14:02:02.851967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.552 qpair failed and we were unable to recover it.
00:30:31.552 [2024-12-05 14:02:02.852072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.852100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.852220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.852247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.852362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.852390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.852509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.852537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.852666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.852696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.852788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.852816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.852939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.852968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.853123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.853151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.853237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.853266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.853430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.853460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.853574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.853619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.853714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.853742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.853919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.853953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.854119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.854154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.854300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.854340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.854458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.854505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.854645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.854679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.854855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.854896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.854997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.855031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.855163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.855197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.855335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.855370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.855517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.855546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.855668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.855703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.855910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.855957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.856114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.856161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.856254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.856283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.856426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.856455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.856594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.856642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.856757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.856784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.856888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.856917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.857047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.857075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.857162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.857190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.857304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.857332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.857431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.857459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.857601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.553 [2024-12-05 14:02:02.857628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.553 qpair failed and we were unable to recover it.
00:30:31.553 [2024-12-05 14:02:02.857785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.857812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.857953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.858001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.858121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.858150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.858274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.858301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.858442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.858471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.858608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.858654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.858747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.858775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.858902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.858930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.859026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.859053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.859142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.554 [2024-12-05 14:02:02.859170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.554 qpair failed and we were unable to recover it.
00:30:31.554 [2024-12-05 14:02:02.859291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.859319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.859474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.859502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.859593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.859621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.859710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.859737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.859866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.859893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 
00:30:31.554 [2024-12-05 14:02:02.859992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.860020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.860130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.860157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.860242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.860270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.860423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.860451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.860532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.860559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 
00:30:31.554 [2024-12-05 14:02:02.860670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.860698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.860848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.860876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.861000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.861031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.861152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.861180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.861334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.861362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 
00:30:31.554 [2024-12-05 14:02:02.861457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.861485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.861570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.861598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.861713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.861742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.861889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.861916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.862000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 
00:30:31.554 [2024-12-05 14:02:02.862113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.862243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.862394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.862536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.862651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 
00:30:31.554 [2024-12-05 14:02:02.862761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.862922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.862949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.863032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.554 [2024-12-05 14:02:02.863059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.554 qpair failed and we were unable to recover it. 00:30:31.554 [2024-12-05 14:02:02.863188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.863231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.863386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.863433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 
00:30:31.555 [2024-12-05 14:02:02.863562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.863591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.863710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.863743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.863869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.863897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.863995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.864024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.864113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.864144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 
00:30:31.555 [2024-12-05 14:02:02.864269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.864297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.864392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.864436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.864551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.864586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.864755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.864800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.864927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.864974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 
00:30:31.555 [2024-12-05 14:02:02.865094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.865122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.865243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.865270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.865380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.865408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.865529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.865557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.865644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.865671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 
00:30:31.555 [2024-12-05 14:02:02.865764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.865792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.865905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.865933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.866021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.866048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.866192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.866220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.866346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.866374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 
00:30:31.555 [2024-12-05 14:02:02.866468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.866497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.866649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.866677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.866793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.866825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.866909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.866937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.867032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.867061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 
00:30:31.555 [2024-12-05 14:02:02.867190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.867218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.867308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.867335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.867455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.867484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.867622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.867664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.867773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.867803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 
00:30:31.555 [2024-12-05 14:02:02.867916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.867946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.868031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.868060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.868171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.868198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.868346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.555 [2024-12-05 14:02:02.868390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.555 qpair failed and we were unable to recover it. 00:30:31.555 [2024-12-05 14:02:02.868571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.868619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 
00:30:31.556 [2024-12-05 14:02:02.868709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.868736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.868875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.868926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.869012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.869039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.869160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.869187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.869287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.869315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 
00:30:31.556 [2024-12-05 14:02:02.869462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.869490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.869607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.869636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.869765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.869795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.869913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.869941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.870035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.870063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 
00:30:31.556 [2024-12-05 14:02:02.870154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.870183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.870270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.870298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.870391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.870426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.870552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.870581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.870718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.870784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 
00:30:31.556 [2024-12-05 14:02:02.870944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.870981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.871095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.871131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.871283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.871312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.871436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.871464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.871587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.871615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 
00:30:31.556 [2024-12-05 14:02:02.871751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.871785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.871923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.871957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.872126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.872160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.872263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.872291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 00:30:31.556 [2024-12-05 14:02:02.872439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.556 [2024-12-05 14:02:02.872467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.556 qpair failed and we were unable to recover it. 
00:30:31.559 [2024-12-05 14:02:02.890578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.890606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.890747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.890781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.890974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.891008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.891115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.891149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.891282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.891316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 
00:30:31.559 [2024-12-05 14:02:02.891461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.891490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.891585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.891613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.891712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.891745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.891886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.559 [2024-12-05 14:02:02.891920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.559 qpair failed and we were unable to recover it. 00:30:31.559 [2024-12-05 14:02:02.892090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.892124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.892265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.892298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.892444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.892474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.892588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.892616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.892712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.892740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.892849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.892883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.892992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.893026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.893119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.893153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.893316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.893357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.893459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.893488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.893633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.893682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.893798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.893845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.893985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.894182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.894296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.894448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.894568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.894739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.894852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.894969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.894998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.895097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.895125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.895214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.895241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.895347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.895388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.895515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.895546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.895670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.895698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.895873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.895908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.896027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.896061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.896200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.896234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.896379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.896407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.896531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.896581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.896733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.896786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.896870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.896897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.897036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.897082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.897196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.897223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.897345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.897373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 00:30:31.560 [2024-12-05 14:02:02.897509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.560 [2024-12-05 14:02:02.897570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.560 qpair failed and we were unable to recover it. 
00:30:31.560 [2024-12-05 14:02:02.897664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.897693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.897783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.897832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.897933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.897968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.898152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.898180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.898305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.898335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 
00:30:31.561 [2024-12-05 14:02:02.898479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.898515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.898619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.898647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.898804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.898853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.898943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.898970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.899078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.899106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 
00:30:31.561 [2024-12-05 14:02:02.899220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.899247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.899359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.899386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.899479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.899506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.899595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.899623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.899748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.899776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 
00:30:31.561 [2024-12-05 14:02:02.899872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.899900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.899992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.900021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.900147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.900177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.900327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.900356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.900490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.900522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 
00:30:31.561 [2024-12-05 14:02:02.900622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.900651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.900795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.900823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.900945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.900975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.901096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.901129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.901265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.901299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 
00:30:31.561 [2024-12-05 14:02:02.901413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.901468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.901601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.901636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.901768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.901796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.901910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.901944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 00:30:31.561 [2024-12-05 14:02:02.902080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.561 [2024-12-05 14:02:02.902114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.561 qpair failed and we were unable to recover it. 
00:30:31.561 [2024-12-05 14:02:02.902236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.561 [2024-12-05 14:02:02.902271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.561 qpair failed and we were unable to recover it.
00:30:31.561 [2024-12-05 14:02:02.902410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.561 [2024-12-05 14:02:02.902468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.561 qpair failed and we were unable to recover it.
00:30:31.561 [2024-12-05 14:02:02.902655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.561 [2024-12-05 14:02:02.902705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.561 qpair failed and we were unable to recover it.
00:30:31.561 [2024-12-05 14:02:02.902847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.561 [2024-12-05 14:02:02.902896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.561 qpair failed and we were unable to recover it.
00:30:31.561 [2024-12-05 14:02:02.903038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.561 [2024-12-05 14:02:02.903090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.561 qpair failed and we were unable to recover it.
00:30:31.561 [2024-12-05 14:02:02.903236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.561 [2024-12-05 14:02:02.903264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.561 qpair failed and we were unable to recover it.
00:30:31.561 [2024-12-05 14:02:02.903403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.903457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.903574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.903601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.903722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.903750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.903843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.903870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.903950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.903977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.904075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.904104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.904192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.904220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.904315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.904342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.904443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.904471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.904614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.904642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.904769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.904797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.904917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.904945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.905045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.905073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.905203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.905245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.905375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.905406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.905564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.905599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.905695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.905729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.905872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.905909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.906025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.906059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.906228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.906263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.906392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.906434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.906602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.906630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.906770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.906803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.906943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.906977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.907121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.907156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.907312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.907341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.907471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.907500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.907636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.907683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.907855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.907901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.908007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.908040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.908171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.908198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.562 [2024-12-05 14:02:02.908291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.562 [2024-12-05 14:02:02.908320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.562 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.908446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.908477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.908611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.908641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.908795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.908830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.908945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.908979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.909152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.909186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.909323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.909351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.909482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.909515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.909652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.909702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.909831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.909869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.909960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.909989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.910107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.910134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.910262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.910290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.910398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.910468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.910615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.910650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.910755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.910802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.910941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.910975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.911125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.911159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.911292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.911326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.911453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.911482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.911629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.911676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.911773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.911820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.911975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.912022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.912167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.912197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.912283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.912311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.912402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.912438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.912563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.912592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.912737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.912779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.912912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.912943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.913094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.913123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.913216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.913244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.913381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.913433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.913560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.913590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.913768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.913803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.913922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.913958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.914077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.563 [2024-12-05 14:02:02.914111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.563 qpair failed and we were unable to recover it.
00:30:31.563 [2024-12-05 14:02:02.914259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.914293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.914446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.914475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.914562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.914611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.914795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.914829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.914937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.914971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.915141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.915178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.915350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.915378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.915538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.915567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.915651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.915679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.915856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.915891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.916044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.916093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.916193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.916234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.916342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.916370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.916527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.916555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.916652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.916683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.916905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.916939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.917049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.917085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.917190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.917225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.917352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.917380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.917516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.917544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.917638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.917666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.917786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.917815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.917910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.917960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.918075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.918104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.918255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.918299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.918440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.918490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.918592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.918622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.918741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.918794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.918915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.918950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.919124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.919158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.919296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.919324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.919428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.919466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.919590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.919619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.919744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.919804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.919912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.919941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.920115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.920163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.920259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.920286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.564 [2024-12-05 14:02:02.920380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.564 [2024-12-05 14:02:02.920408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.564 qpair failed and we were unable to recover it.
00:30:31.565 [2024-12-05 14:02:02.920543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.565 [2024-12-05 14:02:02.920572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.565 qpair failed and we were unable to recover it.
00:30:31.565 [2024-12-05 14:02:02.920729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.565 [2024-12-05 14:02:02.920756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.565 qpair failed and we were unable to recover it.
00:30:31.565 [2024-12-05 14:02:02.920854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.565 [2024-12-05 14:02:02.920882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.565 qpair failed and we were unable to recover it.
00:30:31.565 [2024-12-05 14:02:02.920976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.565 [2024-12-05 14:02:02.921004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.565 qpair failed and we were unable to recover it.
00:30:31.565 [2024-12-05 14:02:02.921130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.921160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.921258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.921286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.921373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.921401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.921518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.921546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.921663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.921694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 
00:30:31.565 [2024-12-05 14:02:02.921818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.921846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.921943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.921973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.922072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.922100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.922199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.922227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.922320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.922356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 
00:30:31.565 [2024-12-05 14:02:02.922481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.922510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.922627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.922655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.922752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.922779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.922863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.922890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.922979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.923006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 
00:30:31.565 [2024-12-05 14:02:02.923104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.923133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.923252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.923295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.923453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.923492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.923644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.923673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.923832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.923860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 
00:30:31.565 [2024-12-05 14:02:02.923985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.924012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.924099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.924127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.924223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.924253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.924350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.924380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.924480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.924508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 
00:30:31.565 [2024-12-05 14:02:02.924659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.924693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.924834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.924869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.925038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.925073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.925252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.925298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.925426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.925454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 
00:30:31.565 [2024-12-05 14:02:02.925594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.925641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.925725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.925753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.925900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.565 [2024-12-05 14:02:02.925945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.565 qpair failed and we were unable to recover it. 00:30:31.565 [2024-12-05 14:02:02.926047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.926075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.926161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.926189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 
00:30:31.566 [2024-12-05 14:02:02.926274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.926301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.926411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.926474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.926611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.926641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.926763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.926792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.926925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.926959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 
00:30:31.566 [2024-12-05 14:02:02.927074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.927109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.927248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.927283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.927436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.927465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.927571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.927604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.927713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.927741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 
00:30:31.566 [2024-12-05 14:02:02.927847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.927895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.927996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.928029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.928180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.928207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.928329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.928359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.928501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.928535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 
00:30:31.566 [2024-12-05 14:02:02.928651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.928679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.928865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.928899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.929040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.929074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.929211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.929244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.929411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.929448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 
00:30:31.566 [2024-12-05 14:02:02.929597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.929624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.929753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.929802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.929919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.929953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.930054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.930087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.930240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.930288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 
00:30:31.566 [2024-12-05 14:02:02.930436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.930487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.930586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.930614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.930774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.930808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.930991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.931025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.931131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.931163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 
00:30:31.566 [2024-12-05 14:02:02.931286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.931315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.931437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.931466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.566 qpair failed and we were unable to recover it. 00:30:31.566 [2024-12-05 14:02:02.931552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.566 [2024-12-05 14:02:02.931579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.931670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.931697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.931794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.931821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.931934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.931962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.932077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.932104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.932229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.932256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.932348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.932375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.932473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.932502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.932615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.932657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.932812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.932848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.932966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.932995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.933117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.933145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.933230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.933257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d24000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.933426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.933468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.933600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.933635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.933742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.933776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.933913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.933945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.934080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.934112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.934216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.934263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.934391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.934427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.934533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.934577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.934745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.934779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.934895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.934928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.935041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.935074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.935179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.935212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.935327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.935359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.935510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.935537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.935651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.935678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.935797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.935824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.935977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.936009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.936174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.936207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.936306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.936339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.936511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.936542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.936691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.936740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.936911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.936959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.937094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.937144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.937258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.937290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 00:30:31.567 [2024-12-05 14:02:02.937379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.937406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.567 qpair failed and we were unable to recover it. 
00:30:31.567 [2024-12-05 14:02:02.937541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.567 [2024-12-05 14:02:02.937587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.937715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.937741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.937857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.937884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.937999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.938127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 
00:30:31.568 [2024-12-05 14:02:02.938243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.938352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.938483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.938593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.938786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 
00:30:31.568 [2024-12-05 14:02:02.938963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.938994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.939097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.939131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.939257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.939286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.939405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.939438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.939542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.939570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 
00:30:31.568 [2024-12-05 14:02:02.939660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.939689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.939809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.939857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.939947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.939976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.940099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.940125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.940246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.940288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 
00:30:31.568 [2024-12-05 14:02:02.940465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.940503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.940626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.940661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.940766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.940800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.940913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.940945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.941079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.941143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 
00:30:31.568 [2024-12-05 14:02:02.941328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.941356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.941477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.941506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.941720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.941775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.941939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.941988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.942206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.942258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 
00:30:31.568 [2024-12-05 14:02:02.942361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.942388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.942496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.942560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.942846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.942910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.943208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.943272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.943442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.943470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 
00:30:31.568 [2024-12-05 14:02:02.943593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.943619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.943768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.943810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.568 [2024-12-05 14:02:02.944056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.568 [2024-12-05 14:02:02.944119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.568 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.944351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.944378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.944518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.944546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 
00:30:31.569 [2024-12-05 14:02:02.944635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.944661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.944758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.944784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.944925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.944986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.945245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.945309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.945530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.945557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 
00:30:31.569 [2024-12-05 14:02:02.945658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.945684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.945901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.945971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.946327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.946390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.946561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.946588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.946714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.946740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 
00:30:31.569 [2024-12-05 14:02:02.946881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.946952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.947206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.947270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.947561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.947593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.947740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.947802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.948004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.948066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 
00:30:31.569 [2024-12-05 14:02:02.948308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.948372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.948578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.948606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.948742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.948804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.949046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.949112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.949375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.949437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 
00:30:31.569 [2024-12-05 14:02:02.949612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.949639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.949757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.949783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.949901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.949968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.950288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.950351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 00:30:31.569 [2024-12-05 14:02:02.950533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.569 [2024-12-05 14:02:02.950560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.569 qpair failed and we were unable to recover it. 
00:30:31.569 [2024-12-05 14:02:02.950652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.950678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.950871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.950912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.951044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.951091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.951228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.951268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.951435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.951509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.951658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.951699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.951843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.951890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.952071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.952115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.952306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.569 [2024-12-05 14:02:02.952334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.569 qpair failed and we were unable to recover it.
00:30:31.569 [2024-12-05 14:02:02.952431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.952459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.952611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.952639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.952766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.952794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.952920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.952949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.953101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.953130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.953254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.953282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.953430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.953457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.953562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.953589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.953762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.953823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.954058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.954122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.954331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.954358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.954480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.954506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.954619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.954646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.954797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.954860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.955045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.955109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.955358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.955400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.955541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.955571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.955806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.955861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.956040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.956101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.956263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.956314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.956435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.956463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.956688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.956748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.956916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.956971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.957136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.957190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.957315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.957342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.957461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.957559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.957902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.957999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.958276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.958345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.958601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.958631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.958784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.958854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.959144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.959172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.959351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.959379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.959528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.959558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.959677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.959746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.960012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.960078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.960367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.960395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.960509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.570 [2024-12-05 14:02:02.960538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.570 qpair failed and we were unable to recover it.
00:30:31.570 [2024-12-05 14:02:02.960661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.960690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.960844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.960909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.961121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.961186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.961458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.961501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.961634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.961664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.961841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.961901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.962080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.962148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.962364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.962463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.962619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.962653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.962838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.962904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.963123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.963191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.963488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.963517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.963713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.963774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.963864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.963893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.963992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.964109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.964233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.964351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.964507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.964658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.964784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.964897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.964924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.965020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.965048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.965171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.965199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.965325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.965356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.965447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.965485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.965616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.965645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.965888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.965917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.966066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.966093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.966180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.966209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.966329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.966358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.966478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.966507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.966629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.966657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.571 [2024-12-05 14:02:02.966778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.571 [2024-12-05 14:02:02.966842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.571 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.967080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.967124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.967284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.967329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.967555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.967583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.967756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.967826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.968077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.968144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.968385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.968413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.968514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.968543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.968670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.968737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.969005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.969034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.969323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.969388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.969633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.969661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.969811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.969840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.969962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.970043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.970320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.970363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.970538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.970571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.970694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.970724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.970976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.971041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.971209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.971286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.971526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.971555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.971677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.971705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.971904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.971932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.972018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.972046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.972324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.972390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.972559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.972589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.972689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.972717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.972896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.972961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.973136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.973211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.973495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.973524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.973658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.973687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.973938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.974002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.974245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.974309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.974576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.974604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.974723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.974750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.974848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.974876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.974999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.572 [2024-12-05 14:02:02.975028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.572 qpair failed and we were unable to recover it.
00:30:31.572 [2024-12-05 14:02:02.975253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.572 [2024-12-05 14:02:02.975318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.572 qpair failed and we were unable to recover it. 00:30:31.572 [2024-12-05 14:02:02.975570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.572 [2024-12-05 14:02:02.975599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.572 qpair failed and we were unable to recover it. 00:30:31.572 [2024-12-05 14:02:02.975714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.572 [2024-12-05 14:02:02.975773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.976129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.976194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.976447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.976514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.976731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.976798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.977058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.977123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.977379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.977467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.977728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.977795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.977999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.978064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.978259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.978328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.978599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.978665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.978916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.978961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.979099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.979142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.979330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.979395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.979667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.979736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.979876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.979920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.980211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.980275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.980511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.980556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.980707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.980758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.981040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.981105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.981370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.981468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.981731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.981799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.982031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.982096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.982333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.982377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.982609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.982675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.982962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.983027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.983261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.983330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.983637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.983703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.983939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.984004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.984195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.984260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.984514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.984581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.984840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.984906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.985152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.985218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.985456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.985523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.985776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.985841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.986041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.986106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.986372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.986453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.986685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.986750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 00:30:31.573 [2024-12-05 14:02:02.986968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.987036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.573 qpair failed and we were unable to recover it. 
00:30:31.573 [2024-12-05 14:02:02.987305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.573 [2024-12-05 14:02:02.987369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.987623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.987667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.987817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.987861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.988135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.988203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.988460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.988530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 
00:30:31.574 [2024-12-05 14:02:02.988776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.988839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.989146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.989212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.989462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.989530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.989786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.989850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.990062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.990127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 
00:30:31.574 [2024-12-05 14:02:02.990376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.990453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.990697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.990763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.990964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.991029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.991298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.991364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.991573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.991640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 
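For anyone triaging this failure: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 while the host driver kept retrying. A minimal sketch (editorial illustration, not part of the test suite) that reproduces the same errno by connecting to a local port with no listener:

```python
import errno
import socket

# Find a TCP port that is currently free: bind to port 0 to let the kernel
# pick one, record it, then close the socket so nothing is listening on it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
_, port = probe.getsockname()
probe.close()

# connect_ex() returns the errno instead of raising; with no listener on
# the port this is ECONNREFUSED -- the "errno = 111" seen in the log.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rc = s.connect_ex(("127.0.0.1", port))
s.close()
print(rc == errno.ECONNREFUSED)  # True (111 on Linux)
```

The host keeps printing the triplet because each retry hits the same refused connect until a listener reappears on the port.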
00:30:31.574 [2024-12-05 14:02:02.991925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.991992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2356635 Killed "${NVMF_APP[@]}" "$@" 00:30:31.574 [2024-12-05 14:02:02.992264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.992338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.992568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.992637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.992928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.992993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:31.574 qpair failed and we were unable to recover it. 
00:30:31.574 [2024-12-05 14:02:02.993266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.993310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:31.574 [2024-12-05 14:02:02.993462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.993532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.574 [2024-12-05 14:02:02.993818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.993884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.574 [2024-12-05 14:02:02.994147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.994213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 
00:30:31.574 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.574 [2024-12-05 14:02:02.994479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.994547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.994805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.994849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.995046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.995117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.995414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.995507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.995771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.995837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 
00:30:31.574 [2024-12-05 14:02:02.996098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.996167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.996461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.996529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.996800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.996865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.997122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.997188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.997450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.997525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 
00:30:31.574 [2024-12-05 14:02:02.997772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.997837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.998061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.998126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.574 [2024-12-05 14:02:02.998365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.574 [2024-12-05 14:02:02.998446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.574 qpair failed and we were unable to recover it. 00:30:31.575 [2024-12-05 14:02:02.998730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.575 [2024-12-05 14:02:02.998800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.575 qpair failed and we were unable to recover it. 00:30:31.575 [2024-12-05 14:02:02.999058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.575 [2024-12-05 14:02:02.999123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.575 qpair failed and we were unable to recover it. 
00:30:31.575 [2024-12-05 14:02:02.999383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.575 [2024-12-05 14:02:02.999480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.575 qpair failed and we were unable to recover it. 00:30:31.575 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2357196 00:30:31.575 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:31.575 [2024-12-05 14:02:02.999744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.575 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2357196 00:30:31.575 [2024-12-05 14:02:02.999813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.575 qpair failed and we were unable to recover it. 00:30:31.575 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2357196 ']' 00:30:31.575 [2024-12-05 14:02:03.000066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.575 [2024-12-05 14:02:03.000132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.575 qpair failed and we were unable to recover it. 
00:30:31.575 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:31.575 [2024-12-05 14:02:03.000358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 14:02:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:31.575 [2024-12-05 14:02:03.000443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:31.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:31.575 [2024-12-05 14:02:03.000669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:31.575 [2024-12-05 14:02:03.000739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.575 [2024-12-05 14:02:03.000964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.001033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.001268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.001314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.001495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.001541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.001746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.001825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.002035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.002109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.002406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.002490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.002740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.002806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.003040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.003084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.003265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.003346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.003694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.003740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.003910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.003954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.004192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.004258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.004549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.004617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.004875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.004943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.005145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.005214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.005478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.005545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.005766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.005832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.006114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.006182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.006406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.006487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.006770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.006836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.007125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.007192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.007447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.007516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.007758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.007836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.008092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.008160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.008439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.008518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.575 [2024-12-05 14:02:03.008815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.575 [2024-12-05 14:02:03.008881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.575 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.009138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.009203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.009495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.009564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.009816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.009880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.010156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.010199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.010335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.010379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.010539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.010618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.010863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.010929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.011216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.011280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.011543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.011611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.011903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.011969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.012252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.012318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.012639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.012706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.012931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.012996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.013223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.013290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.013599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.013666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.013920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.013986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.014275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.014340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.014611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.014679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.014968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.015035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.015325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.015390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.015611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.015654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.015827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.015893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.016200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.016266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.016483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.016562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.016821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.016887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.017141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.017206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.017466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.017535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.017800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.017865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.018111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.018179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.018470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.018537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.018792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.018859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.019111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.576 [2024-12-05 14:02:03.019177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.576 qpair failed and we were unable to recover it.
00:30:31.576 [2024-12-05 14:02:03.019447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.019515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.019721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.019789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.019998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.020068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.020326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.020392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.020663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.020746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.020998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.021063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.021351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.021432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.021671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.021716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.021875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.021919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.022067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.022143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.022396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.022477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.022736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.022804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.023088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.023154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.023352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.023434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.023679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.023744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.023954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.024021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.024253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.024318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.024643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.024712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.024973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.025040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.025327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.025392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.025666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.025732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.025985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.026054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.026313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.026378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.026657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.026726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.026964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.027031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.027231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.027297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.027579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.027646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.027887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.027954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.028239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.028304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.028554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.028624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.028878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.028944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.029215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.029284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.029584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.577 [2024-12-05 14:02:03.029652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420
00:30:31.577 qpair failed and we were unable to recover it.
00:30:31.577 [2024-12-05 14:02:03.029953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.577 [2024-12-05 14:02:03.030019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.577 qpair failed and we were unable to recover it. 00:30:31.577 [2024-12-05 14:02:03.030296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.577 [2024-12-05 14:02:03.030363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.577 qpair failed and we were unable to recover it. 00:30:31.577 [2024-12-05 14:02:03.030663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.577 [2024-12-05 14:02:03.030730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.577 qpair failed and we were unable to recover it. 00:30:31.577 [2024-12-05 14:02:03.030980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.577 [2024-12-05 14:02:03.031049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.577 qpair failed and we were unable to recover it. 00:30:31.577 [2024-12-05 14:02:03.031334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.031379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.031541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.031625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.031873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.031938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.032207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.032272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.032550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.032617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.032863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.032927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.033214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.033279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.033544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.033622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.033923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.033988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.034277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.034342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.034657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.034723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.034962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.035027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.035277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.035346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.035577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.035622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.035802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.035878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.036125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.036191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.036397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.036479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.036782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.036847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.037098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.037164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.037388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.037471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.037741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.037806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.038065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.038133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.038393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.038476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.038783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.038849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.039061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.039126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.039354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.039433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.039643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.039714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.039977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.040022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.040202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.040265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.040531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.040597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.040850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.040915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.041176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.041245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.041536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.041603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.041895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.041960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.042216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.042260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.042437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.042504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 
00:30:31.578 [2024-12-05 14:02:03.042762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.042826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.043082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.578 [2024-12-05 14:02:03.043148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.578 qpair failed and we were unable to recover it. 00:30:31.578 [2024-12-05 14:02:03.043400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.043479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.043738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.043803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.044012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.044077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 
00:30:31.579 [2024-12-05 14:02:03.044279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.044345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.044557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.044626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.044871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.044936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.045184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.045227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.045374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.045439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 
00:30:31.579 [2024-12-05 14:02:03.045743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.045809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.046066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.046142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.046358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.046443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.046655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.046720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.046974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.047017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 
00:30:31.579 [2024-12-05 14:02:03.047177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.047221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.047370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.047467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.047740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.047805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.048093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.048159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.048469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.048536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 
00:30:31.579 [2024-12-05 14:02:03.048843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.048909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.049205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.049271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.049570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.049645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.579 qpair failed and we were unable to recover it. 00:30:31.579 [2024-12-05 14:02:03.049940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.579 [2024-12-05 14:02:03.049984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.843 qpair failed and we were unable to recover it. 00:30:31.843 [2024-12-05 14:02:03.050148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.843 [2024-12-05 14:02:03.050193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.843 qpair failed and we were unable to recover it. 
00:30:31.843 [2024-12-05 14:02:03.050657] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization...
00:30:31.843 [2024-12-05 14:02:03.050752] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:31.844 [2024-12-05 14:02:03.062752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.062818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.063023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.063091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.063334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.063378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.063536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.063581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.063759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.063834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 
00:30:31.844 [2024-12-05 14:02:03.064138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.064204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.064406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.064504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.064778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.064844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.065040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.065105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.065345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.065411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 
00:30:31.844 [2024-12-05 14:02:03.065724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.065789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.066035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.066102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.066311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.066378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.066659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.066725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.067004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.067048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 
00:30:31.844 [2024-12-05 14:02:03.067246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.067315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.067557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.067624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.067886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.067951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.068149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.068214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.068445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.068512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 
00:30:31.844 [2024-12-05 14:02:03.068805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.068870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.069121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.069187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.069459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.069527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.069780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.844 [2024-12-05 14:02:03.069846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.844 qpair failed and we were unable to recover it. 00:30:31.844 [2024-12-05 14:02:03.070076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.070141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.070377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.070457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.070702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.070780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.071037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.071107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.071360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.071442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.071699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.071769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.071978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.072045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.072331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.072397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.072727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.072793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.073022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.073088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.073344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.073390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.073603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.073670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.073910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.073975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.074267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.074332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.074592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.074659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.074918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.074986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.075295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.075360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.075633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.075702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.075947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.076015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.076223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.076291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.076538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.076586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.076737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.076781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.076982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.077026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.077208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.077275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.077502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.077548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.077755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.077799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.077933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.077979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.078133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.078177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.078358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.078402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.078595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.078641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.078786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.078831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.079018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.079064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.079279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.079323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.079502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.079538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.079652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.079686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.079858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.079892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 
00:30:31.845 [2024-12-05 14:02:03.080008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.080044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.080249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.845 [2024-12-05 14:02:03.080316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.845 qpair failed and we were unable to recover it. 00:30:31.845 [2024-12-05 14:02:03.080544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.080579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.080746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.080790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.080989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.081054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.081282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.081348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.081575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.081619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.081766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.081801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.081932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.081966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.082084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.082117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.082237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.082272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.082380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.082415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.082550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.082584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.082724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.082758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.082897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.082933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.083049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.083084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.083187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.083222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.083328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.083362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.083471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.083507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.083625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.083659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.083774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.083810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.083930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.083965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.084069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.084104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.084243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.084410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.084472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.084615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.084648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.084817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.084851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.084968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.085001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.085168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.085201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.085303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.085336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.085477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.085511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.085671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.085704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.085820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.085853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.085963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.085998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.086108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.086142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.086281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.086314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.086456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.086490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.086597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.086632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.086745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.086778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 00:30:31.846 [2024-12-05 14:02:03.086884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.846 [2024-12-05 14:02:03.086917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.846 qpair failed and we were unable to recover it. 
00:30:31.846 [2024-12-05 14:02:03.087057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.087089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.087223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.087256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.087354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.087388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.087530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.087562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.087697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.087729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 
00:30:31.847 [2024-12-05 14:02:03.087851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.087882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.087981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.088093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.088223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.088384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 
00:30:31.847 [2024-12-05 14:02:03.088512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.088624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.088742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.088893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.088920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.089025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 
00:30:31.847 [2024-12-05 14:02:03.089139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.089244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.089354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.089529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.089645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 
00:30:31.847 [2024-12-05 14:02:03.089788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.089914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.089940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.090054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.090191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.090308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 
00:30:31.847 [2024-12-05 14:02:03.090429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.090547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.090667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.090775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.090930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.090956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 
00:30:31.847 [2024-12-05 14:02:03.091048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.091074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.091150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.091177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.847 qpair failed and we were unable to recover it. 00:30:31.847 [2024-12-05 14:02:03.091289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.847 [2024-12-05 14:02:03.091315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.091396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.091428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.091522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.091549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.091639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.091666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.091776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.091802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.091896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.091923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.092035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.092061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.092178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.092205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.092329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.092368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.092483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.092511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.092601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.092627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.092753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.092779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.092871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.092896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.092979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.093087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.093200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.093310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.093429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.093541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.093684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.093830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.093964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.093990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.094084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.094110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.094224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.094250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.094361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.094386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.094521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.094547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.094657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.094682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.094770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.094795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.094882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.094908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.094996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.095134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.095250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.095377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.095494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.095632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.095773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.095881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.095907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.848 [2024-12-05 14:02:03.096000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.096027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 
00:30:31.848 [2024-12-05 14:02:03.096110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.848 [2024-12-05 14:02:03.096136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.848 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.096224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.096250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.096335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.096361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.096487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.096519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.096604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.096630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 
00:30:31.849 [2024-12-05 14:02:03.096744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.096770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.096847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.096875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.096988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.097015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.097126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.097152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.097260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.097287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 
00:30:31.849 [2024-12-05 14:02:03.100369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.100393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.100495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.100533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.100628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.100655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.100760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.849 [2024-12-05 14:02:03.100787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.849 qpair failed and we were unable to recover it. 00:30:31.849 [2024-12-05 14:02:03.100901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.850 [2024-12-05 14:02:03.100927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.850 qpair failed and we were unable to recover it. 
00:30:31.850 [2024-12-05 14:02:03.101643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.850 [2024-12-05 14:02:03.101668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.850 qpair failed and we were unable to recover it. 00:30:31.850 [2024-12-05 14:02:03.101762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.850 [2024-12-05 14:02:03.101787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315fa0 with addr=10.0.0.2, port=4420 00:30:31.850 qpair failed and we were unable to recover it. 00:30:31.850 A controller has encountered a failure and is being reset. 00:30:31.850 [2024-12-05 14:02:03.101913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.850 [2024-12-05 14:02:03.101940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d28000b90 with addr=10.0.0.2, port=4420 00:30:31.850 qpair failed and we were unable to recover it. 00:30:31.850 [2024-12-05 14:02:03.102027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.850 [2024-12-05 14:02:03.102054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.850 qpair failed and we were unable to recover it. 00:30:31.850 [2024-12-05 14:02:03.102136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.850 [2024-12-05 14:02:03.102162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d30000b90 with addr=10.0.0.2, port=4420 00:30:31.850 qpair failed and we were unable to recover it. 
00:30:31.850 [2024-12-05 14:02:03.103915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.850 [2024-12-05 14:02:03.103951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1323f30 with addr=10.0.0.2, port=4420 00:30:31.850 [2024-12-05 14:02:03.103971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1323f30 is same with the state(6) to be set 00:30:31.850 [2024-12-05 14:02:03.103996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1323f30 (9): Bad file descriptor 00:30:31.850 [2024-12-05 14:02:03.104016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:31.850 [2024-12-05 14:02:03.104030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:31.850 [2024-12-05 14:02:03.104046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:31.850 Unable to reset the controller. 00:30:31.850 [2024-12-05 14:02:03.127618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:31.850 [2024-12-05 14:02:03.188548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.850 [2024-12-05 14:02:03.188608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.850 [2024-12-05 14:02:03.188623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.850 [2024-12-05 14:02:03.188635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.850 [2024-12-05 14:02:03.188646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:31.850 [2024-12-05 14:02:03.190192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:31.850 [2024-12-05 14:02:03.190244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:31.850 [2024-12-05 14:02:03.190293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:31.850 [2024-12-05 14:02:03.190296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.850 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.112 Malloc0 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.112 14:02:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.112 [2024-12-05 14:02:03.388275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.112 14:02:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.112 [2024-12-05 14:02:03.416574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.112 14:02:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2356670 00:30:32.680 Controller properly reset. 
00:30:37.945 Initializing NVMe Controllers 00:30:37.945 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:37.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:37.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:37.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:37.945 Initialization complete. Launching workers. 00:30:37.945 Starting thread on core 1 00:30:37.945 Starting thread on core 2 00:30:37.945 Starting thread on core 3 00:30:37.945 Starting thread on core 0 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:37.945 00:30:37.945 real 0m10.705s 00:30:37.945 user 0m35.012s 00:30:37.945 sys 0m7.133s 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:37.945 ************************************ 00:30:37.945 END TEST nvmf_target_disconnect_tc2 00:30:37.945 ************************************ 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.945 14:02:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.945 rmmod nvme_tcp 00:30:37.945 rmmod nvme_fabrics 00:30:37.945 rmmod nvme_keyring 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2357196 ']' 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2357196 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2357196 ']' 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2357196 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2357196 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2357196' 00:30:37.945 killing process with pid 2357196 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2357196 00:30:37.945 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2357196 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.205 14:02:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.118 14:02:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.118 00:30:40.118 real 0m15.707s 00:30:40.118 user 1m0.519s 00:30:40.118 
sys 0m9.614s 00:30:40.118 14:02:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.118 14:02:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:40.118 ************************************ 00:30:40.118 END TEST nvmf_target_disconnect 00:30:40.118 ************************************ 00:30:40.118 14:02:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:40.118 00:30:40.118 real 5m5.678s 00:30:40.118 user 11m7.509s 00:30:40.118 sys 1m15.158s 00:30:40.118 14:02:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.118 14:02:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.118 ************************************ 00:30:40.118 END TEST nvmf_host 00:30:40.118 ************************************ 00:30:40.377 14:02:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:40.377 14:02:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:40.377 14:02:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.377 14:02:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:40.377 14:02:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.377 14:02:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.377 ************************************ 00:30:40.377 START TEST nvmf_target_core_interrupt_mode 00:30:40.377 ************************************ 00:30:40.377 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.377 * Looking for test storage... 
00:30:40.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:40.378 14:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:40.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.378 --rc 
genhtml_branch_coverage=1 00:30:40.378 --rc genhtml_function_coverage=1 00:30:40.378 --rc genhtml_legend=1 00:30:40.378 --rc geninfo_all_blocks=1 00:30:40.378 --rc geninfo_unexecuted_blocks=1 00:30:40.378 00:30:40.378 ' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:40.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.378 --rc genhtml_branch_coverage=1 00:30:40.378 --rc genhtml_function_coverage=1 00:30:40.378 --rc genhtml_legend=1 00:30:40.378 --rc geninfo_all_blocks=1 00:30:40.378 --rc geninfo_unexecuted_blocks=1 00:30:40.378 00:30:40.378 ' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:40.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.378 --rc genhtml_branch_coverage=1 00:30:40.378 --rc genhtml_function_coverage=1 00:30:40.378 --rc genhtml_legend=1 00:30:40.378 --rc geninfo_all_blocks=1 00:30:40.378 --rc geninfo_unexecuted_blocks=1 00:30:40.378 00:30:40.378 ' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:40.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.378 --rc genhtml_branch_coverage=1 00:30:40.378 --rc genhtml_function_coverage=1 00:30:40.378 --rc genhtml_legend=1 00:30:40.378 --rc geninfo_all_blocks=1 00:30:40.378 --rc geninfo_unexecuted_blocks=1 00:30:40.378 00:30:40.378 ' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.378 
14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.378 14:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.378 
14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:40.378 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.379 ************************************ 00:30:40.379 START TEST nvmf_abort 00:30:40.379 ************************************ 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.379 * Looking for test storage... 
00:30:40.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:30:40.379 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:40.640 14:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.640 --rc genhtml_branch_coverage=1 00:30:40.640 --rc genhtml_function_coverage=1 00:30:40.640 --rc genhtml_legend=1 00:30:40.640 --rc geninfo_all_blocks=1 00:30:40.640 --rc geninfo_unexecuted_blocks=1 00:30:40.640 00:30:40.640 ' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.640 --rc genhtml_branch_coverage=1 00:30:40.640 --rc genhtml_function_coverage=1 00:30:40.640 --rc genhtml_legend=1 00:30:40.640 --rc geninfo_all_blocks=1 00:30:40.640 --rc geninfo_unexecuted_blocks=1 00:30:40.640 00:30:40.640 ' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.640 --rc genhtml_branch_coverage=1 00:30:40.640 --rc genhtml_function_coverage=1 00:30:40.640 --rc genhtml_legend=1 00:30:40.640 --rc geninfo_all_blocks=1 00:30:40.640 --rc geninfo_unexecuted_blocks=1 00:30:40.640 00:30:40.640 ' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.640 --rc genhtml_branch_coverage=1 00:30:40.640 --rc genhtml_function_coverage=1 00:30:40.640 --rc genhtml_legend=1 00:30:40.640 --rc geninfo_all_blocks=1 00:30:40.640 --rc geninfo_unexecuted_blocks=1 00:30:40.640 00:30:40.640 ' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.640 14:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.640 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.641 14:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.641 14:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.543 14:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.543 14:02:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:42.543 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:42.543 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.543 
14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:42.543 Found net devices under 0000:09:00.0: cvl_0_0 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:42.543 Found net devices under 0000:09:00.1: cvl_0_1 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.543 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.544 14:02:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.544 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:30:42.803 00:30:42.803 --- 10.0.0.2 ping statistics --- 00:30:42.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.803 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:30:42.803 00:30:42.803 --- 10.0.0.1 ping statistics --- 00:30:42.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.803 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2359915 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2359915 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2359915 ']' 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.803 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:42.803 [2024-12-05 14:02:14.216171] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:42.803 [2024-12-05 14:02:14.217348] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:30:42.804 [2024-12-05 14:02:14.217437] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.804 [2024-12-05 14:02:14.290755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:43.061 [2024-12-05 14:02:14.344670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.061 [2024-12-05 14:02:14.344722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.061 [2024-12-05 14:02:14.344750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.061 [2024-12-05 14:02:14.344761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.061 [2024-12-05 14:02:14.344771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.061 [2024-12-05 14:02:14.346243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.061 [2024-12-05 14:02:14.346307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.061 [2024-12-05 14:02:14.346311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.061 [2024-12-05 14:02:14.431169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:43.061 [2024-12-05 14:02:14.431389] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:43.062 [2024-12-05 14:02:14.431407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:43.062 [2024-12-05 14:02:14.431659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.062 [2024-12-05 14:02:14.483041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:43.062 Malloc0 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.062 Delay0 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.062 [2024-12-05 14:02:14.555189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.062 14:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:43.321 [2024-12-05 14:02:14.703498] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:45.858 Initializing NVMe Controllers 00:30:45.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:45.858 controller IO queue size 128 less than required 00:30:45.858 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:45.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:45.858 Initialization complete. Launching workers. 
00:30:45.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27051 00:30:45.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27108, failed to submit 66 00:30:45.858 success 27051, unsuccessful 57, failed 0 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.858 rmmod nvme_tcp 00:30:45.858 rmmod nvme_fabrics 00:30:45.858 rmmod nvme_keyring 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.858 14:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2359915 ']' 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2359915 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2359915 ']' 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2359915 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359915 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359915' 00:30:45.858 killing process with pid 2359915 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2359915 00:30:45.858 14:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2359915 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.858 14:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.858 14:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.769 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.769 00:30:47.769 real 0m7.368s 00:30:47.769 user 0m9.547s 00:30:47.769 sys 0m2.908s 00:30:47.769 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:47.769 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:47.769 ************************************ 00:30:47.769 END TEST nvmf_abort 00:30:47.769 ************************************ 00:30:47.769 14:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:47.769 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:47.769 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:47.769 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:47.769 ************************************ 00:30:47.769 START TEST nvmf_ns_hotplug_stress 00:30:47.769 ************************************ 00:30:47.769 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:48.028 * Looking for test storage... 
00:30:48.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.028 14:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.028 14:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.028 --rc genhtml_branch_coverage=1 00:30:48.028 --rc genhtml_function_coverage=1 00:30:48.028 --rc genhtml_legend=1 00:30:48.028 --rc geninfo_all_blocks=1 00:30:48.028 --rc geninfo_unexecuted_blocks=1 00:30:48.028 00:30:48.028 ' 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.028 --rc genhtml_branch_coverage=1 00:30:48.028 --rc genhtml_function_coverage=1 00:30:48.028 --rc genhtml_legend=1 00:30:48.028 --rc geninfo_all_blocks=1 00:30:48.028 --rc geninfo_unexecuted_blocks=1 00:30:48.028 00:30:48.028 ' 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.028 --rc genhtml_branch_coverage=1 00:30:48.028 --rc genhtml_function_coverage=1 00:30:48.028 --rc genhtml_legend=1 00:30:48.028 --rc geninfo_all_blocks=1 00:30:48.028 --rc geninfo_unexecuted_blocks=1 00:30:48.028 00:30:48.028 ' 00:30:48.028 14:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.028 --rc genhtml_branch_coverage=1 00:30:48.028 --rc genhtml_function_coverage=1 00:30:48.028 --rc genhtml_legend=1 00:30:48.028 --rc geninfo_all_blocks=1 00:30:48.028 --rc geninfo_unexecuted_blocks=1 00:30:48.028 00:30:48.028 ' 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.028 14:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.028 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.029 
14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
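The paths/export.sh lines above prepend the same /opt/golangci, /opt/protoc and /opt/go directories on every source, so PATH accumulates many duplicate entries. A hedged sketch of an idempotent alternative — `prepend_path` is an assumed helper name for illustration, not part of SPDK:

```shell
# Prepend a directory to PATH only if it is not already present.
# Wrapping PATH in colons lets a plain pattern match find whole entries.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
```

The duplicated entries in the log are harmless (lookup stops at the first hit) but make the `echo $PATH` output at paths/export.sh@6 hard to read.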
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.029 14:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.587 
14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.587 14:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:50.587 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.587 14:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:50.587 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.587 
14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:50.587 Found net devices under 0000:09:00.0: cvl_0_0 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.587 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:50.588 Found net devices under 0000:09:00.1: cvl_0_1 00:30:50.588 
14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:30:50.588 00:30:50.588 --- 10.0.0.2 ping statistics --- 00:30:50.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.588 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:30:50.588 00:30:50.588 --- 10.0.0.1 ping statistics --- 00:30:50.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.588 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.588 14:02:21 
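Condensing the nvmf_tcp_init sequence traced above: the target-side NIC is moved into a network namespace, each side gets a 10.0.0.x/24 address, links are brought up, an iptables accept rule tagged with an `SPDK_NVMF:` comment (so cleanup can find it later) opens port 4420, and a ping in each direction verifies connectivity. A sketch of that sequence, using the interface and namespace names from the log; it requires root and the actual cvl_0_0/cvl_0_1 devices, so treat it as an environment-dependent fragment rather than something runnable as-is:

```shell
# Namespace/IP setup as performed by nvmf/common.sh in the trace (root required).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target NIC into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tagged accept rule for the NVMe/TCP port, as the ipts wrapper emits it:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

The sub-millisecond ping RTTs in the log (0.286 ms and 0.109 ms) confirm both directions before the nvmf target is started inside the namespace.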
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2362228 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2362228 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2362228 ']' 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.588 14:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.588 [2024-12-05 14:02:21.842874] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.588 [2024-12-05 14:02:21.843973] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:30:50.588 [2024-12-05 14:02:21.844026] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.588 [2024-12-05 14:02:21.916669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.588 [2024-12-05 14:02:21.973028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.588 [2024-12-05 14:02:21.973076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.588 [2024-12-05 14:02:21.973104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.588 [2024-12-05 14:02:21.973116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.588 [2024-12-05 14:02:21.973125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:50.588 [2024-12-05 14:02:21.974496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.588 [2024-12-05 14:02:21.974553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.588 [2024-12-05 14:02:21.974558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.588 [2024-12-05 14:02:22.061007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:50.588 [2024-12-05 14:02:22.061243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:50.588 [2024-12-05 14:02:22.061270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:50.588 [2024-12-05 14:02:22.061530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:50.588 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:50.849 [2024-12-05 14:02:22.355261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.108 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:51.367 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.624 [2024-12-05 14:02:22.907614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.624 14:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:51.882 14:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:52.139 Malloc0 00:30:52.139 14:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:52.398 Delay0 00:30:52.398 14:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.656 14:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:52.914 NULL1 00:30:52.914 14:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:53.170 14:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2362641 00:30:53.170 14:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:53.170 14:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:30:53.171 14:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.545 Read completed with error (sct=0, sc=11) 00:30:54.545 14:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:30:54.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.803 14:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:54.803 14:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:55.060 true 00:30:55.061 14:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:30:55.061 14:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.659 14:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.916 14:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:55.916 14:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:56.174 true 00:30:56.174 14:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:30:56.174 14:02:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.432 14:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.690 14:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:56.690 14:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:56.950 true 00:30:56.950 14:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:30:56.950 14:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.208 14:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.775 14:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:57.775 14:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:57.775 true 00:30:57.775 14:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 
00:30:57.775 14:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.971 14:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.229 14:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:59.229 14:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:59.486 true 00:30:59.486 14:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:30:59.486 14:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.744 14:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.002 14:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:00.002 14:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1006 00:31:00.260 true 00:31:00.261 14:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:00.261 14:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.519 14:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.777 14:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:00.777 14:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:01.035 true 00:31:01.035 14:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:01.035 14:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.971 14:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.229 14:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:02.229 14:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:02.487 true 00:31:02.487 14:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:02.487 14:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.746 14:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.004 14:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:03.004 14:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:03.570 true 00:31:03.570 14:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:03.570 14:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.570 14:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.828 14:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:03.828 14:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:04.086 true 00:31:04.086 14:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:04.086 14:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.024 14:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.590 14:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:05.590 14:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:05.590 true 00:31:05.590 14:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:05.590 14:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.848 14:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.417 14:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:06.417 14:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:06.417 true 00:31:06.417 14:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:06.417 14:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.675 14:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.933 14:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:06.933 14:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:07.208 true 00:31:07.208 14:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:07.208 14:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.141 14:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.141 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:31:08.400 14:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:08.400 14:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:08.659 true 00:31:08.659 14:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:08.659 14:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.917 14:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.175 14:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:09.175 14:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:09.433 true 00:31:09.433 14:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:09.433 14:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.690 14:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.254 14:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:10.254 14:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:10.254 true 00:31:10.254 14:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:10.254 14:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.631 14:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.631 14:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:11.631 14:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:11.889 true 00:31:11.889 14:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:11.889 14:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.147 14:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.404 14:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:12.404 14:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:12.662 true 00:31:12.662 14:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:12.662 14:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.920 14:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.178 14:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:13.178 14:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:13.436 true 00:31:13.436 14:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:13.436 14:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.370 14:02:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.627 14:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:14.627 14:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:14.884 true 00:31:14.884 14:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:14.884 14:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.506 14:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.506 14:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:15.506 14:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:15.781 true 00:31:15.781 14:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:15.781 14:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:31:16.039 14:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.297 14:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:16.297 14:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:16.555 true 00:31:16.555 14:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:16.555 14:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.489 14:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.747 14:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:17.747 14:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:18.004 true 00:31:18.004 14:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:18.004 14:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.262 14:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.520 14:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:18.520 14:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:18.777 true 00:31:19.035 14:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:19.035 14:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.294 14:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.552 14:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:19.552 14:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:19.810 true 00:31:19.810 14:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:19.810 14:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.748 14:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.006 14:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:21.007 14:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:21.264 true 00:31:21.264 14:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:21.265 14:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.522 14:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.780 14:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:21.780 14:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:22.038 true 00:31:22.038 14:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:22.038 14:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.296 14:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.554 14:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:22.554 14:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:22.812 true 00:31:22.812 14:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:22.812 14:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.747 14:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.748 Initializing NVMe Controllers 00:31:23.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:23.748 Controller IO queue size 128, less than required. 00:31:23.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:23.748 Controller IO queue size 128, less than required. 00:31:23.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:23.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:23.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:23.748 Initialization complete. Launching workers. 00:31:23.748 ======================================================== 00:31:23.748 Latency(us) 00:31:23.748 Device Information : IOPS MiB/s Average min max 00:31:23.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 527.33 0.26 98179.03 3053.13 1014214.12 00:31:23.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8182.98 4.00 15643.20 2887.65 447137.05 00:31:23.748 ======================================================== 00:31:23.748 Total : 8710.31 4.25 20640.00 2887.65 1014214.12 00:31:23.748 00:31:24.005 14:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:24.005 14:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:24.263 true 00:31:24.263 14:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2362641 00:31:24.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2362641) - No such process 00:31:24.263 14:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2362641 00:31:24.263 14:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.520 14:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.778 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:24.778 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:24.778 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:24.778 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:24.778 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:25.036 null0 00:31:25.036 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.036 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.036 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:25.294 null1 00:31:25.294 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.294 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.294 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:25.554 null2 00:31:25.554 14:02:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.554 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.554 14:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:25.812 null3 00:31:25.812 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.812 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.812 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:26.070 null4 00:31:26.070 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.070 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.070 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:26.328 null5 00:31:26.328 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.328 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.328 14:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:26.586 null6 00:31:26.586 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.586 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.586 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:26.846 null7 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
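The records above trace ns_hotplug_stress.sh@58-@60 creating one null bdev per worker thread via `bdev_null_create null<i> 100 4096` (the two numeric arguments are the bdev size in MiB and the block size in SPDK's RPC). A minimal runnable sketch of that setup loop, with a hypothetical `rpc` stub standing in for scripts/rpc.py so it runs offline:

```shell
# rpc() is a hypothetical stand-in for scripts/rpc.py; it just echoes the call.
rpc() { echo "rpc.py $*"; }

nthreads=8
created=()
# ns_hotplug_stress.sh@58-@60 (as seen in the log): create null0..null7,
# each 100 MiB with a 4096-byte block size.
for ((i = 0; i < nthreads; i++)); do
  rpc bdev_null_create "null$i" 100 4096
  created+=("null$i")
done
echo "created ${#created[@]} null bdevs"
```

With the real rpc.py each call would print the new bdev name (the bare `null0`, `null1`, ... lines in the log).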
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
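The interleaved `add_remove N nullM` / `pids+=($!)` records above show eight background workers being launched (ns_hotplug_stress.sh@62-@66), each hammering one namespace ID against cnode1. A runnable sketch reconstructed from the @14-@18 trace, assuming a no-op `rpc` stub in place of scripts/rpc.py:

```shell
rpc() { :; }  # hypothetical no-op stand-in for scripts/rpc.py

# Reconstruction of add_remove (ns_hotplug_stress.sh@14-@18 as traced in the
# log): ten add/remove cycles of one nsid/bdev pair on cnode1.
add_remove() {
  local nsid=$1 bdev=$2
  for ((i = 0; i < 10; i++)); do
    rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
  done
}

# ns_hotplug_stress.sh@62-@66: one background worker per null bdev, pids
# collected with $!, then wait for all of them.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
  add_remove $((i + 1)) "null$i" &
  pids+=($!)
done
wait "${pids[@]}"
echo "all ${#pids[@]} workers finished"
```

Running eight concurrent add/remove loops against the same subsystem is what exercises the namespace hotplug paths under contention; the `wait 2366656 2366657 ...` record in the log is the `wait "${pids[@]}"` step.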
00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2366656 2366657 2366659 2366661 2366663 2366665 2366667 2366669 00:31:26.846 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.847 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.847 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.105 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.105 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.105 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:31:27.364 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.364 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.364 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.364 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.364 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.622 14:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.881 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.881 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.881 14:02:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.881 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.881 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.881 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.881 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.881 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.139 14:02:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.139 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.398 14:02:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.398 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.657 14:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.657 14:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.657 14:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.657 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.915 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.915 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.915 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.915 14:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.916 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.174 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.174 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.174 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.174 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.174 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.174 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.433 14:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.433 14:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.433 14:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.691 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.691 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.691 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.691 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.691 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.691 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.691 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.691 14:03:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.949 14:03:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.949 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.950 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.950 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.950 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:30.207 14:03:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:30.207 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:30.208 14:03:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
8 nqn.2016-06.io.spdk:cnode1 null7 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.466 14:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:30.724 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.724 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.724 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:30.724 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:30.724 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:30.724 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.724 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:30.983 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:30.983 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:30.983 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:30.983 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.241 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.499 14:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.756 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.756 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.756 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.756 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.756 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.756 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.757 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.014 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.014 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.014 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:32.014 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:31:32.014 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.014 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:32.014 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.271 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:32.529 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.529 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.529 14:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.529 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.529 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.529 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.529 14:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:32.529 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:32.785 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.785 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.785 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:33.042 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.042 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.042 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.042 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.042 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.042 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:33.043 14:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:33.043 rmmod nvme_tcp 00:31:33.043 rmmod nvme_fabrics 00:31:33.043 rmmod nvme_keyring 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2362228 ']' 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2362228 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2362228 ']' 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2362228 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362228 00:31:33.043 14:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362228' 00:31:33.043 killing process with pid 2362228 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2362228 00:31:33.043 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2362228 00:31:33.341 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:33.341 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:33.341 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:33.341 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:33.341 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:33.342 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:33.342 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:33.342 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:33.342 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:33.342 14:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.342 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.342 14:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:35.907 00:31:35.907 real 0m47.552s 00:31:35.907 user 3m21.184s 00:31:35.907 sys 0m21.630s 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:35.907 ************************************ 00:31:35.907 END TEST nvmf_ns_hotplug_stress 00:31:35.907 ************************************ 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:35.907 ************************************ 00:31:35.907 START TEST nvmf_delete_subsystem 00:31:35.907 ************************************ 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:35.907 * Looking for test storage... 00:31:35.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.907 
14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:35.907 14:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:35.907 14:03:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:35.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.907 --rc genhtml_branch_coverage=1 00:31:35.907 --rc genhtml_function_coverage=1 00:31:35.907 --rc genhtml_legend=1 00:31:35.907 --rc geninfo_all_blocks=1 00:31:35.907 --rc geninfo_unexecuted_blocks=1 00:31:35.907 00:31:35.907 ' 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:35.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.907 --rc genhtml_branch_coverage=1 00:31:35.907 --rc genhtml_function_coverage=1 00:31:35.907 --rc genhtml_legend=1 00:31:35.907 --rc geninfo_all_blocks=1 00:31:35.907 --rc geninfo_unexecuted_blocks=1 00:31:35.907 00:31:35.907 ' 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:35.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.907 --rc genhtml_branch_coverage=1 00:31:35.907 --rc genhtml_function_coverage=1 00:31:35.907 --rc genhtml_legend=1 00:31:35.907 --rc geninfo_all_blocks=1 00:31:35.907 --rc 
geninfo_unexecuted_blocks=1 00:31:35.907 00:31:35.907 ' 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:35.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.907 --rc genhtml_branch_coverage=1 00:31:35.907 --rc genhtml_function_coverage=1 00:31:35.907 --rc genhtml_legend=1 00:31:35.907 --rc geninfo_all_blocks=1 00:31:35.907 --rc geninfo_unexecuted_blocks=1 00:31:35.907 00:31:35.907 ' 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.907 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.908 
14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:35.908 14:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.908 14:03:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:37.810 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:37.811 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:31:37.811 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.811 14:03:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:37.811 Found net devices under 0000:09:00.0: cvl_0_0 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:37.811 Found net devices under 0000:09:00.1: cvl_0_1 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:37.811 14:03:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:31:37.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:31:37.811 00:31:37.811 --- 10.0.0.2 ping statistics --- 00:31:37.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.811 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:37.811 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:37.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:31:37.811 00:31:37.811 --- 10.0.0.1 ping statistics --- 00:31:37.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.812 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2370037 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2370037 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2370037 ']' 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
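The per-test network plumbing that nvmf/common.sh performed above (namespace, interface move, addresses, firewall rule, cross-ping) can be collected into one script. It needs root and the real cvl_0_0/cvl_0_1 interfaces, so this sketch only echoes the commands by default; the RUN override is an invented convenience for illustration, not part of SPDK:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps from the log above.
# RUN is a hypothetical switch: leave it unset to print the commands,
# set RUN= (empty) to execute them for real (root + real cvl_* NICs).
setup_nvmf_tcp() {
    local run=${RUN-echo}
    $run ip -4 addr flush cvl_0_0
    $run ip -4 addr flush cvl_0_1
    $run ip netns add cvl_0_0_ns_spdk
    $run ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side
    $run ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP
    $run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    $run ip link set cvl_0_1 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface.
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                                 # initiator -> target
    $run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
}

setup_nvmf_tcp
```

Moving one port of the dual-port NIC into a private namespace gives the target a genuinely separate network stack, so target and initiator traffic really crosses the wire between the two ports.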
00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:37.812 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.070 [2024-12-05 14:03:09.338485] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:38.070 [2024-12-05 14:03:09.339727] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:31:38.070 [2024-12-05 14:03:09.339797] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.070 [2024-12-05 14:03:09.418433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:38.070 [2024-12-05 14:03:09.476942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.070 [2024-12-05 14:03:09.477006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.070 [2024-12-05 14:03:09.477019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.070 [2024-12-05 14:03:09.477030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.070 [2024-12-05 14:03:09.477040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.070 [2024-12-05 14:03:09.478352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.070 [2024-12-05 14:03:09.478357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.070 [2024-12-05 14:03:09.568838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
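The waitforlisten step above amounts to polling for the RPC Unix socket while checking that the target process is still alive. A simplified sketch (the retry count and interval here are guesses, not the real values in autotest_common.sh):

```shell
#!/usr/bin/env bash
# Simplified sketch of waitforlisten: succeed once the SPDK RPC socket
# exists, fail fast if the target process dies first, give up after a
# bounded number of retries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_addr ]] && return 0           # socket is listening
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking the pid on every iteration is what lets the harness report a crashed nvmf_tgt immediately instead of burning the whole timeout.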
00:31:38.070 [2024-12-05 14:03:09.569120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:38.070 [2024-12-05 14:03:09.571653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:38.071 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.071 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:38.071 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:38.071 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:38.071 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.329 [2024-12-05 14:03:09.618993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.329 [2024-12-05 14:03:09.635240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.329 NULL1 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.329 Delay0 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2370180 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:38.329 14:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:38.329 [2024-12-05 14:03:09.714223] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
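Collected from the rpc_cmd calls above, the target configuration for this test is a short rpc.py sequence. Shown as a dry run (the leading echo is an invented guard; paths assume an SPDK checkout; delay values are microseconds, as in the log):

```shell
#!/usr/bin/env bash
# The delete_subsystem.sh setup steps from the log, gathered in one place.
# Dry run: drop the "echo" to issue the RPCs against a live nvmf_tgt.
configure_target() {
    local rpc="echo scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512        # null backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delays in usec, per the log
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

configure_target
```

The Delay0 bdev is the interesting part: by stretching every I/O out, it guarantees that spdk_nvme_perf still has a full queue of commands in flight when the subsystem is deleted a moment later.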
00:31:40.229 14:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.229 14:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.229 14:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 
00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 [2024-12-05 14:03:11.887819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0860 is same with the state(6) to be set 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read 
completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 starting I/O failed: -6 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 [2024-12-05 14:03:11.889578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbd04000c40 is same with the state(6) to be set 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.487 Write completed with error (sct=0, sc=8) 00:31:40.487 Read completed with error (sct=0, sc=8) 00:31:40.488 Read 
completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, 
sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write 
completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 [2024-12-05 14:03:11.890069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf04a0 is same with the state(6) to be set 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:40.488 Write completed with error (sct=0, sc=8) 00:31:40.488 Read completed with error (sct=0, sc=8) 00:31:41.423 [2024-12-05 14:03:12.849806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf19b0 is same with the state(6) to be set 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed 
with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 [2024-12-05 14:03:12.888807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbd0400d020 is same with the state(6) to be set 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 [2024-12-05 14:03:12.888997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbd0400d800 is same with the state(6) to be set 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read 
completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 [2024-12-05 14:03:12.891029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf02c0 is same with the state(6) to be set 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Write completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 Read completed with error (sct=0, sc=8) 00:31:41.423 [2024-12-05 14:03:12.892739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0680 is same with the state(6) to be set 00:31:41.423 Initializing NVMe Controllers 00:31:41.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:41.423 Controller IO queue size 128, less than required. 00:31:41.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:41.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:41.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:41.423 Initialization complete. Launching workers. 
00:31:41.423 ======================================================== 00:31:41.423 Latency(us) 00:31:41.423 Device Information : IOPS MiB/s Average min max 00:31:41.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 149.94 0.07 951451.75 2259.19 1047561.36 00:31:41.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.37 0.08 918750.68 550.64 1012349.86 00:31:41.423 ======================================================== 00:31:41.423 Total : 309.31 0.15 934602.56 550.64 1047561.36 00:31:41.423 00:31:41.423 [2024-12-05 14:03:12.893264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf19b0 (9): Bad file descriptor 00:31:41.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:41.423 14:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.423 14:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:41.423 14:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2370180 00:31:41.423 14:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2370180 00:31:41.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2370180) - No such process 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2370180 00:31:41.991 14:03:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2370180 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2370180 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:41.991 [2024-12-05 14:03:13.415184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2370588 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:41.991 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:41.991 [2024-12-05 14:03:13.480217] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:42.557 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:42.557 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:42.557 14:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:43.122 14:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:43.122 14:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:43.122 14:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:43.700 14:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:43.700 14:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:43.700 14:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:43.957 14:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:43.957 14:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:43.957 14:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:44.521 14:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:44.521 14:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:44.521 14:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:45.086 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:45.086 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:45.086 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:45.344 Initializing NVMe Controllers 00:31:45.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:45.344 Controller IO queue size 128, less than required. 00:31:45.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:45.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:45.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:45.344 Initialization complete. Launching workers. 
00:31:45.344 ======================================================== 00:31:45.344 Latency(us) 00:31:45.344 Device Information : IOPS MiB/s Average min max 00:31:45.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005132.22 1000225.51 1043881.40 00:31:45.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004748.81 1000223.76 1011552.51 00:31:45.344 ======================================================== 00:31:45.344 Total : 256.00 0.12 1004940.52 1000223.76 1043881.40 00:31:45.344 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2370588 00:31:45.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2370588) - No such process 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2370588 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.604 rmmod nvme_tcp 00:31:45.604 rmmod nvme_fabrics 00:31:45.604 rmmod nvme_keyring 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2370037 ']' 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2370037 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2370037 ']' 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2370037 00:31:45.604 14:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:45.604 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.604 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370037 00:31:45.604 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.604 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.604 14:03:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370037' 00:31:45.604 killing process with pid 2370037 00:31:45.604 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2370037 00:31:45.604 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2370037 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.869 14:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.869 14:03:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.776 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:48.036 00:31:48.036 real 0m12.442s 00:31:48.036 user 0m24.688s 00:31:48.036 sys 0m3.903s 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:48.036 ************************************ 00:31:48.036 END TEST nvmf_delete_subsystem 00:31:48.036 ************************************ 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:48.036 ************************************ 00:31:48.036 START TEST nvmf_host_management 00:31:48.036 ************************************ 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:48.036 * Looking for test storage... 
00:31:48.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.036 14:03:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.036 --rc genhtml_branch_coverage=1 00:31:48.036 --rc genhtml_function_coverage=1 00:31:48.036 --rc genhtml_legend=1 00:31:48.036 --rc geninfo_all_blocks=1 00:31:48.036 --rc geninfo_unexecuted_blocks=1 00:31:48.036 00:31:48.036 ' 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.036 --rc genhtml_branch_coverage=1 00:31:48.036 --rc genhtml_function_coverage=1 00:31:48.036 --rc genhtml_legend=1 00:31:48.036 --rc geninfo_all_blocks=1 00:31:48.036 --rc geninfo_unexecuted_blocks=1 00:31:48.036 00:31:48.036 ' 00:31:48.036 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.036 --rc genhtml_branch_coverage=1 00:31:48.036 --rc genhtml_function_coverage=1 00:31:48.036 --rc genhtml_legend=1 00:31:48.037 --rc geninfo_all_blocks=1 00:31:48.037 --rc geninfo_unexecuted_blocks=1 00:31:48.037 00:31:48.037 ' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:48.037 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.037 --rc genhtml_branch_coverage=1 00:31:48.037 --rc genhtml_function_coverage=1 00:31:48.037 --rc genhtml_legend=1 00:31:48.037 --rc geninfo_all_blocks=1 00:31:48.037 --rc geninfo_unexecuted_blocks=1 00:31:48.037 00:31:48.037 ' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.037 14:03:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 
14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.037 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.038 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.038 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:48.038 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:48.038 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:48.038 14:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:50.577 
14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.577 14:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:50.577 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.577 14:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:50.577 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.577 14:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:50.577 Found net devices under 0000:09:00.0: cvl_0_0 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:50.577 Found net devices under 0000:09:00.1: cvl_0_1 00:31:50.577 14:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:50.577 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:50.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:31:50.578 00:31:50.578 --- 10.0.0.2 ping statistics --- 00:31:50.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.578 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:50.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:31:50.578 00:31:50.578 --- 10.0.0.1 ping statistics --- 00:31:50.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.578 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2372927 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2372927 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2372927 ']' 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.578 14:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.578 [2024-12-05 14:03:21.774164] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:50.578 [2024-12-05 14:03:21.775242] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:31:50.578 [2024-12-05 14:03:21.775307] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.578 [2024-12-05 14:03:21.846596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.578 [2024-12-05 14:03:21.907640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.578 [2024-12-05 14:03:21.907686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.578 [2024-12-05 14:03:21.907715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.578 [2024-12-05 14:03:21.907726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.578 [2024-12-05 14:03:21.907742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:50.578 [2024-12-05 14:03:21.909695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.578 [2024-12-05 14:03:21.909751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.578 [2024-12-05 14:03:21.909810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:50.578 [2024-12-05 14:03:21.909814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.578 [2024-12-05 14:03:22.000334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:50.578 [2024-12-05 14:03:22.000592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:50.578 [2024-12-05 14:03:22.000880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:50.578 [2024-12-05 14:03:22.001578] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:50.578 [2024-12-05 14:03:22.001820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.578 [2024-12-05 14:03:22.054528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.578 14:03:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.578 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.838 Malloc0 00:31:50.838 [2024-12-05 14:03:22.130746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2373091 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2373091 /var/tmp/bdevperf.sock 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2373091 ']' 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 
00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:50.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:50.838 { 00:31:50.838 "params": { 00:31:50.838 "name": "Nvme$subsystem", 00:31:50.838 "trtype": "$TEST_TRANSPORT", 00:31:50.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.838 "adrfam": "ipv4", 00:31:50.838 "trsvcid": "$NVMF_PORT", 00:31:50.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.838 "hdgst": ${hdgst:-false}, 00:31:50.838 "ddgst": ${ddgst:-false} 00:31:50.838 }, 00:31:50.838 "method": "bdev_nvme_attach_controller" 00:31:50.838 } 00:31:50.838 EOF 00:31:50.838 )") 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:50.838 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:50.838 "params": { 00:31:50.838 "name": "Nvme0", 00:31:50.838 "trtype": "tcp", 00:31:50.838 "traddr": "10.0.0.2", 00:31:50.838 "adrfam": "ipv4", 00:31:50.838 "trsvcid": "4420", 00:31:50.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.838 "hdgst": false, 00:31:50.838 "ddgst": false 00:31:50.838 }, 00:31:50.838 "method": "bdev_nvme_attach_controller" 00:31:50.838 }' 00:31:50.838 [2024-12-05 14:03:22.213301] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:31:50.839 [2024-12-05 14:03:22.213375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373091 ] 00:31:50.839 [2024-12-05 14:03:22.282382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.839 [2024-12-05 14:03:22.342089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.407 Running I/O for 10 seconds... 
00:31:51.407 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:51.408 14:03:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:31:51.408 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:31:51.667 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:31:51.667 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:51.667 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:51.667 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:51.667 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:51.667 14:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:51.667 [2024-12-05 14:03:23.030556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11f80 is same with the state(6) to be set 00:31:51.667 [2024-12-05 14:03:23.030611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11f80 is same with the state(6) to be set 00:31:51.667 [2024-12-05 14:03:23.030626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11f80 is same with the state(6) to be set 00:31:51.667 [2024-12-05 14:03:23.033704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
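The waitforio calls traced above (host_management.sh@54-62) implement a bounded poll: up to 10 iterations, read `num_read_ops` from `bdev_get_iostat` via jq, succeed once the count reaches 100 (here 67 on the first pass, 515 after a 0.25s sleep). A hedged sketch of that loop; `get_read_ops` is a stand-in for the real `rpc_cmd ... bdev_get_iostat | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
#!/usr/bin/env bash
# Stand-in for the RPC + jq pipeline; FAKE_OPS simulates the iostat counter.
get_read_ops() { echo "$FAKE_OPS"; }

# Bounded poll, mirroring host_management.sh@52-64: ret=1 until the read
# count crosses the threshold, at most 10 attempts, 0.25s between attempts.
waitforio() {
  local ret=1 i count
  for (( i = 10; i != 0; i-- )); do
    count=$(get_read_ops)
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

FAKE_OPS=515   # the value this run observed on its second poll
waitforio && io_status=ok || io_status=timeout
echo "$io_status"
```

The bounded retry (rather than an unconditional sleep) is what lets the test proceed as soon as bdevperf has demonstrably issued I/O, while still failing deterministically if it never does.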
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.667 [2024-12-05 14:03:23.033757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.667 [2024-12-05 14:03:23.033776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.667 [2024-12-05 14:03:23.033790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.667 [2024-12-05 14:03:23.033805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.667 [2024-12-05 14:03:23.033819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.667 [2024-12-05 14:03:23.033833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.667 [2024-12-05 14:03:23.033847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.667 [2024-12-05 14:03:23.033861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbca50 is same with the state(6) to be set 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:51.667 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.667 14:03:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:51.667 [2024-12-05 14:03:23.038183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.667 [2024-12-05 14:03:23.038211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:51.668 [2024-12-05 14:03:23.038580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.038972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.038986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.039001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.039015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.039030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.039045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.039060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.039074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.039090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.039103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.039119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.039133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.039152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.668 [2024-12-05 14:03:23.039166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.668 [2024-12-05 14:03:23.039181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 
[2024-12-05 14:03:23.039255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 
[2024-12-05 14:03:23.039956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.039985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.039999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.040015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.040029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.040044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.040058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.669 [2024-12-05 14:03:23.040073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.669 [2024-12-05 14:03:23.040095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.670 [2024-12-05 14:03:23.040110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.670 [2024-12-05 14:03:23.040124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.670 [2024-12-05 14:03:23.040139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.670 [2024-12-05 14:03:23.040153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.670 [2024-12-05 14:03:23.040168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.670 [2024-12-05 14:03:23.040182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.670 [2024-12-05 14:03:23.041374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:51.670 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.670 14:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:51.670 task offset: 73728 on job bdev=Nvme0n1 fails 00:31:51.670 00:31:51.670 Latency(us) 00:31:51.670 [2024-12-05T13:03:23.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.670 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:51.670 Job: Nvme0n1 ended in about 0.40 seconds with error 00:31:51.670 Verification LBA range: start 0x0 length 0x400 00:31:51.670 Nvme0n1 : 0.40 1447.35 90.46 160.82 0.00 38669.40 2439.40 40195.41 00:31:51.670 [2024-12-05T13:03:23.196Z] =================================================================================================================== 00:31:51.670 [2024-12-05T13:03:23.196Z] Total : 1447.35 90.46 160.82 0.00 38669.40 2439.40 40195.41 
00:31:51.670 [2024-12-05 14:03:23.043265] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:51.670 [2024-12-05 14:03:23.043295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbca50 (9): Bad file descriptor 00:31:51.670 [2024-12-05 14:03:23.135551] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2373091 00:31:52.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2373091) - No such process 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:31:52.600 { 00:31:52.600 "params": { 00:31:52.600 "name": "Nvme$subsystem", 00:31:52.600 "trtype": "$TEST_TRANSPORT", 00:31:52.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.600 "adrfam": "ipv4", 00:31:52.600 "trsvcid": "$NVMF_PORT", 00:31:52.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.600 "hdgst": ${hdgst:-false}, 00:31:52.600 "ddgst": ${ddgst:-false} 00:31:52.600 }, 00:31:52.600 "method": "bdev_nvme_attach_controller" 00:31:52.600 } 00:31:52.600 EOF 00:31:52.600 )") 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:52.600 14:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:52.600 "params": { 00:31:52.600 "name": "Nvme0", 00:31:52.600 "trtype": "tcp", 00:31:52.600 "traddr": "10.0.0.2", 00:31:52.600 "adrfam": "ipv4", 00:31:52.600 "trsvcid": "4420", 00:31:52.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:52.600 "hdgst": false, 00:31:52.600 "ddgst": false 00:31:52.600 }, 00:31:52.600 "method": "bdev_nvme_attach_controller" 00:31:52.600 }' 00:31:52.600 [2024-12-05 14:03:24.094184] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:31:52.600 [2024-12-05 14:03:24.094260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373253 ] 00:31:52.858 [2024-12-05 14:03:24.166619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.858 [2024-12-05 14:03:24.226005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.117 Running I/O for 1 seconds... 00:31:54.491 1646.00 IOPS, 102.88 MiB/s 00:31:54.491 Latency(us) 00:31:54.491 [2024-12-05T13:03:26.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.491 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:54.491 Verification LBA range: start 0x0 length 0x400 00:31:54.491 Nvme0n1 : 1.03 1681.25 105.08 0.00 0.00 37453.46 5776.88 33399.09 00:31:54.491 [2024-12-05T13:03:26.017Z] =================================================================================================================== 00:31:54.491 [2024-12-05T13:03:26.017Z] Total : 1681.25 105.08 0.00 0.00 37453.46 5776.88 33399.09 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.491 rmmod nvme_tcp 00:31:54.491 rmmod nvme_fabrics 00:31:54.491 rmmod nvme_keyring 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2372927 ']' 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2372927 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2372927 ']' 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2372927 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:54.491 14:03:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372927 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372927' 00:31:54.491 killing process with pid 2372927 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2372927 00:31:54.491 14:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2372927 00:31:54.748 [2024-12-05 14:03:26.129278] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:54.748 14:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.748 14:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:57.283 00:31:57.283 real 0m8.860s 00:31:57.283 user 0m18.117s 00:31:57.283 sys 0m3.691s 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.283 ************************************ 00:31:57.283 END TEST nvmf_host_management 00:31:57.283 ************************************ 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:57.283 
14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:57.283 ************************************ 00:31:57.283 START TEST nvmf_lvol 00:31:57.283 ************************************ 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:57.283 * Looking for test storage... 00:31:57.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.283 14:03:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:57.283 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.284 --rc genhtml_branch_coverage=1 00:31:57.284 --rc 
genhtml_function_coverage=1 00:31:57.284 --rc genhtml_legend=1 00:31:57.284 --rc geninfo_all_blocks=1 00:31:57.284 --rc geninfo_unexecuted_blocks=1 00:31:57.284 00:31:57.284 ' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.284 --rc genhtml_branch_coverage=1 00:31:57.284 --rc genhtml_function_coverage=1 00:31:57.284 --rc genhtml_legend=1 00:31:57.284 --rc geninfo_all_blocks=1 00:31:57.284 --rc geninfo_unexecuted_blocks=1 00:31:57.284 00:31:57.284 ' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.284 --rc genhtml_branch_coverage=1 00:31:57.284 --rc genhtml_function_coverage=1 00:31:57.284 --rc genhtml_legend=1 00:31:57.284 --rc geninfo_all_blocks=1 00:31:57.284 --rc geninfo_unexecuted_blocks=1 00:31:57.284 00:31:57.284 ' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.284 --rc genhtml_branch_coverage=1 00:31:57.284 --rc genhtml_function_coverage=1 00:31:57.284 --rc genhtml_legend=1 00:31:57.284 --rc geninfo_all_blocks=1 00:31:57.284 --rc geninfo_unexecuted_blocks=1 00:31:57.284 00:31:57.284 ' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.284 14:03:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.284 14:03:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.284 14:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:59.331 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:59.331 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.331 14:03:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:59.331 Found net devices under 0000:09:00.0: cvl_0_0 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:59.331 Found net devices under 0000:09:00.1: cvl_0_1 00:31:59.331 14:03:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.331 14:03:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.331 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:59.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:31:59.331 00:31:59.331 --- 10.0.0.2 ping statistics --- 00:31:59.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.332 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:31:59.332 00:31:59.332 --- 10.0.0.1 ping statistics --- 00:31:59.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.332 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:59.332 
14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2375458 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2375458 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2375458 ']' 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.332 14:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.332 [2024-12-05 14:03:30.799249] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:59.332 [2024-12-05 14:03:30.800282] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:31:59.332 [2024-12-05 14:03:30.800340] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.595 [2024-12-05 14:03:30.869961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.595 [2024-12-05 14:03:30.926026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.595 [2024-12-05 14:03:30.926068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.595 [2024-12-05 14:03:30.926097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.595 [2024-12-05 14:03:30.926109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.595 [2024-12-05 14:03:30.926119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.595 [2024-12-05 14:03:30.927673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.595 [2024-12-05 14:03:30.927730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.595 [2024-12-05 14:03:30.927735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.595 [2024-12-05 14:03:31.021140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:59.595 [2024-12-05 14:03:31.021399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:59.595 [2024-12-05 14:03:31.021472] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:59.595 [2024-12-05 14:03:31.021693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:59.595 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.595 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:59.595 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.595 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.595 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.595 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.595 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.852 [2024-12-05 14:03:31.324386] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.852 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.420 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:00.420 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.420 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:00.420 14:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:00.989 14:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:01.249 14:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9e5e039b-8079-4b56-8e06-49f5ee23d8aa 00:32:01.249 14:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e5e039b-8079-4b56-8e06-49f5ee23d8aa lvol 20 00:32:01.508 14:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bcb85cc8-6eb7-40d2-8ed4-4519f80cc051 00:32:01.508 14:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:01.766 14:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bcb85cc8-6eb7-40d2-8ed4-4519f80cc051 00:32:02.025 14:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.283 [2024-12-05 14:03:33.600570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.283 14:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:02.541 
14:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2375878 00:32:02.541 14:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:02.541 14:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:03.476 14:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bcb85cc8-6eb7-40d2-8ed4-4519f80cc051 MY_SNAPSHOT 00:32:03.735 14:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bddffb47-e747-4bd6-85d6-a456b6b887df 00:32:03.735 14:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bcb85cc8-6eb7-40d2-8ed4-4519f80cc051 30 00:32:04.301 14:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bddffb47-e747-4bd6-85d6-a456b6b887df MY_CLONE 00:32:04.301 14:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=dd483e09-40d5-47b8-8b44-2430b692155c 00:32:04.301 14:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dd483e09-40d5-47b8-8b44-2430b692155c 00:32:05.237 14:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2375878 00:32:13.357 Initializing NVMe Controllers 00:32:13.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:13.357 
Controller IO queue size 128, less than required. 00:32:13.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:13.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:13.357 Initialization complete. Launching workers. 00:32:13.357 ======================================================== 00:32:13.357 Latency(us) 00:32:13.357 Device Information : IOPS MiB/s Average min max 00:32:13.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10658.30 41.63 12015.87 6185.49 66818.20 00:32:13.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10482.20 40.95 12210.94 3372.56 73197.30 00:32:13.357 ======================================================== 00:32:13.357 Total : 21140.50 82.58 12112.60 3372.56 73197.30 00:32:13.357 00:32:13.357 14:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.357 14:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bcb85cc8-6eb7-40d2-8ed4-4519f80cc051 00:32:13.357 14:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e5e039b-8079-4b56-8e06-49f5ee23d8aa 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:13.923 rmmod nvme_tcp 00:32:13.923 rmmod nvme_fabrics 00:32:13.923 rmmod nvme_keyring 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2375458 ']' 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2375458 00:32:13.923 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2375458 ']' 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2375458 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2375458 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2375458' 00:32:13.924 killing process with pid 2375458 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2375458 00:32:13.924 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2375458 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.182 14:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.182 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:16.084 00:32:16.084 real 0m19.291s 00:32:16.084 user 0m56.344s 00:32:16.084 sys 0m7.784s 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:16.084 ************************************ 00:32:16.084 END TEST nvmf_lvol 00:32:16.084 ************************************ 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:16.084 ************************************ 00:32:16.084 START TEST nvmf_lvs_grow 00:32:16.084 ************************************ 00:32:16.084 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:16.343 * Looking for test storage... 
00:32:16.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.343 14:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.343 14:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:16.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.343 --rc genhtml_branch_coverage=1 00:32:16.343 --rc genhtml_function_coverage=1 00:32:16.343 --rc genhtml_legend=1 00:32:16.343 --rc geninfo_all_blocks=1 00:32:16.343 --rc geninfo_unexecuted_blocks=1 00:32:16.343 00:32:16.343 ' 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:16.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.343 --rc genhtml_branch_coverage=1 00:32:16.343 --rc genhtml_function_coverage=1 00:32:16.343 --rc genhtml_legend=1 00:32:16.343 --rc geninfo_all_blocks=1 00:32:16.343 --rc geninfo_unexecuted_blocks=1 00:32:16.343 00:32:16.343 ' 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:16.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.343 --rc genhtml_branch_coverage=1 00:32:16.343 --rc genhtml_function_coverage=1 00:32:16.343 --rc genhtml_legend=1 00:32:16.343 --rc geninfo_all_blocks=1 00:32:16.343 --rc geninfo_unexecuted_blocks=1 00:32:16.343 00:32:16.343 ' 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:16.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.343 --rc genhtml_branch_coverage=1 00:32:16.343 --rc genhtml_function_coverage=1 00:32:16.343 --rc genhtml_legend=1 00:32:16.343 --rc geninfo_all_blocks=1 00:32:16.343 --rc 
geninfo_unexecuted_blocks=1 00:32:16.343 00:32:16.343 ' 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.343 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:16.344 14:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.344 14:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.344 14:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:16.344 14:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:18.874 
14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.874 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:18.874 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:18.874 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:18.874 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.875 14:03:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:18.875 14:03:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:18.875 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:18.875 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:18.875 Found net devices under 0000:09:00.0: cvl_0_0 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.875 14:03:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:18.875 Found net devices under 0000:09:00.1: cvl_0_1 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:18.875 
14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:18.875 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:18.876 14:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:18.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:32:18.876 00:32:18.876 --- 10.0.0.2 ping statistics --- 00:32:18.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.876 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:18.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:32:18.876 00:32:18.876 --- 10.0.0.1 ping statistics --- 00:32:18.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.876 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:18.876 14:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2379142 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2379142 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2379142 ']' 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:18.876 [2024-12-05 14:03:50.116327] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:18.876 [2024-12-05 14:03:50.117444] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:32:18.876 [2024-12-05 14:03:50.117511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.876 [2024-12-05 14:03:50.192949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.876 [2024-12-05 14:03:50.252261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.876 [2024-12-05 14:03:50.252323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.876 [2024-12-05 14:03:50.252352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.876 [2024-12-05 14:03:50.252363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.876 [2024-12-05 14:03:50.252373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.876 [2024-12-05 14:03:50.253046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.876 [2024-12-05 14:03:50.347174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:18.876 [2024-12-05 14:03:50.347511] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.876 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:19.136 [2024-12-05 14:03:50.649661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:19.395 ************************************ 00:32:19.395 START TEST lvs_grow_clean 00:32:19.395 ************************************ 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:19.395 14:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:19.395 14:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:19.655 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:19.655 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:19.914 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:19.914 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:19.914 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:20.175 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:20.175 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:20.175 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7def1d54-9844-49e5-af8d-b73e830ae4ff lvol 150 00:32:20.434 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2663320-6c49-4ac4-8404-7e7c3c1307c1 00:32:20.434 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:20.434 14:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:20.691 [2024-12-05 14:03:52.101543] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:20.691 [2024-12-05 14:03:52.101632] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:20.691 true 00:32:20.691 14:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:20.691 14:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:20.948 14:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:20.948 14:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:21.207 14:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2663320-6c49-4ac4-8404-7e7c3c1307c1 00:32:21.467 14:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:21.728 [2024-12-05 14:03:53.193874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.728 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2379578 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2379578 /var/tmp/bdevperf.sock 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2379578 ']' 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:21.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.987 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:22.246 [2024-12-05 14:03:53.531718] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:32:22.246 [2024-12-05 14:03:53.531809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379578 ] 00:32:22.246 [2024-12-05 14:03:53.599205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.246 [2024-12-05 14:03:53.659717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.506 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.506 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:22.506 14:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:22.765 Nvme0n1 00:32:22.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:23.025 [ 00:32:23.025 { 00:32:23.025 "name": "Nvme0n1", 00:32:23.025 "aliases": [ 00:32:23.025 "d2663320-6c49-4ac4-8404-7e7c3c1307c1" 00:32:23.025 ], 00:32:23.025 "product_name": "NVMe disk", 00:32:23.025 
"block_size": 4096, 00:32:23.025 "num_blocks": 38912, 00:32:23.025 "uuid": "d2663320-6c49-4ac4-8404-7e7c3c1307c1", 00:32:23.025 "numa_id": 0, 00:32:23.025 "assigned_rate_limits": { 00:32:23.025 "rw_ios_per_sec": 0, 00:32:23.025 "rw_mbytes_per_sec": 0, 00:32:23.025 "r_mbytes_per_sec": 0, 00:32:23.025 "w_mbytes_per_sec": 0 00:32:23.025 }, 00:32:23.025 "claimed": false, 00:32:23.025 "zoned": false, 00:32:23.025 "supported_io_types": { 00:32:23.025 "read": true, 00:32:23.025 "write": true, 00:32:23.025 "unmap": true, 00:32:23.025 "flush": true, 00:32:23.025 "reset": true, 00:32:23.025 "nvme_admin": true, 00:32:23.025 "nvme_io": true, 00:32:23.025 "nvme_io_md": false, 00:32:23.025 "write_zeroes": true, 00:32:23.025 "zcopy": false, 00:32:23.025 "get_zone_info": false, 00:32:23.025 "zone_management": false, 00:32:23.025 "zone_append": false, 00:32:23.025 "compare": true, 00:32:23.025 "compare_and_write": true, 00:32:23.025 "abort": true, 00:32:23.025 "seek_hole": false, 00:32:23.025 "seek_data": false, 00:32:23.025 "copy": true, 00:32:23.025 "nvme_iov_md": false 00:32:23.025 }, 00:32:23.025 "memory_domains": [ 00:32:23.025 { 00:32:23.025 "dma_device_id": "system", 00:32:23.025 "dma_device_type": 1 00:32:23.025 } 00:32:23.025 ], 00:32:23.025 "driver_specific": { 00:32:23.025 "nvme": [ 00:32:23.025 { 00:32:23.025 "trid": { 00:32:23.025 "trtype": "TCP", 00:32:23.025 "adrfam": "IPv4", 00:32:23.025 "traddr": "10.0.0.2", 00:32:23.025 "trsvcid": "4420", 00:32:23.025 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:23.025 }, 00:32:23.025 "ctrlr_data": { 00:32:23.025 "cntlid": 1, 00:32:23.025 "vendor_id": "0x8086", 00:32:23.025 "model_number": "SPDK bdev Controller", 00:32:23.025 "serial_number": "SPDK0", 00:32:23.025 "firmware_revision": "25.01", 00:32:23.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.025 "oacs": { 00:32:23.025 "security": 0, 00:32:23.025 "format": 0, 00:32:23.025 "firmware": 0, 00:32:23.025 "ns_manage": 0 00:32:23.025 }, 00:32:23.025 "multi_ctrlr": true, 
00:32:23.025 "ana_reporting": false 00:32:23.025 }, 00:32:23.025 "vs": { 00:32:23.025 "nvme_version": "1.3" 00:32:23.025 }, 00:32:23.025 "ns_data": { 00:32:23.025 "id": 1, 00:32:23.025 "can_share": true 00:32:23.025 } 00:32:23.025 } 00:32:23.025 ], 00:32:23.025 "mp_policy": "active_passive" 00:32:23.025 } 00:32:23.025 } 00:32:23.025 ] 00:32:23.025 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2379706 00:32:23.025 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:23.025 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:23.025 Running I/O for 10 seconds... 00:32:23.963 Latency(us) 00:32:23.963 [2024-12-05T13:03:55.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.963 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:32:23.963 [2024-12-05T13:03:55.489Z] =================================================================================================================== 00:32:23.963 [2024-12-05T13:03:55.489Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:32:23.963 00:32:24.898 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:25.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.156 Nvme0n1 : 2.00 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:32:25.156 [2024-12-05T13:03:56.682Z] 
=================================================================================================================== 00:32:25.156 [2024-12-05T13:03:56.682Z] Total : 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:32:25.156 00:32:25.156 true 00:32:25.156 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:25.156 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:25.416 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:25.416 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:25.416 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2379706 00:32:25.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.984 Nvme0n1 : 3.00 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:32:25.984 [2024-12-05T13:03:57.510Z] =================================================================================================================== 00:32:25.984 [2024-12-05T13:03:57.510Z] Total : 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:32:25.984 00:32:27.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.362 Nvme0n1 : 4.00 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:32:27.362 [2024-12-05T13:03:58.888Z] =================================================================================================================== 00:32:27.362 [2024-12-05T13:03:58.888Z] Total : 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:32:27.362 00:32:28.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:32:28.299 Nvme0n1 : 5.00 15341.60 59.93 0.00 0.00 0.00 0.00 0.00 00:32:28.299 [2024-12-05T13:03:59.825Z] =================================================================================================================== 00:32:28.299 [2024-12-05T13:03:59.825Z] Total : 15341.60 59.93 0.00 0.00 0.00 0.00 0.00 00:32:28.299 00:32:29.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.238 Nvme0n1 : 6.00 15388.17 60.11 0.00 0.00 0.00 0.00 0.00 00:32:29.238 [2024-12-05T13:04:00.764Z] =================================================================================================================== 00:32:29.238 [2024-12-05T13:04:00.764Z] Total : 15388.17 60.11 0.00 0.00 0.00 0.00 0.00 00:32:29.238 00:32:30.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:30.177 Nvme0n1 : 7.00 15439.57 60.31 0.00 0.00 0.00 0.00 0.00 00:32:30.177 [2024-12-05T13:04:01.703Z] =================================================================================================================== 00:32:30.177 [2024-12-05T13:04:01.703Z] Total : 15439.57 60.31 0.00 0.00 0.00 0.00 0.00 00:32:30.177 00:32:31.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:31.110 Nvme0n1 : 8.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:32:31.110 [2024-12-05T13:04:02.636Z] =================================================================================================================== 00:32:31.110 [2024-12-05T13:04:02.636Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:32:31.110 00:32:32.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.047 Nvme0n1 : 9.00 15529.33 60.66 0.00 0.00 0.00 0.00 0.00 00:32:32.047 [2024-12-05T13:04:03.573Z] =================================================================================================================== 00:32:32.047 [2024-12-05T13:04:03.573Z] Total : 15529.33 60.66 0.00 0.00 0.00 0.00 0.00 00:32:32.047 
00:32:32.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.982 Nvme0n1 : 10.00 15570.20 60.82 0.00 0.00 0.00 0.00 0.00 00:32:32.982 [2024-12-05T13:04:04.508Z] =================================================================================================================== 00:32:32.982 [2024-12-05T13:04:04.508Z] Total : 15570.20 60.82 0.00 0.00 0.00 0.00 0.00 00:32:32.982 00:32:32.982 00:32:32.982 Latency(us) 00:32:32.982 [2024-12-05T13:04:04.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.982 Nvme0n1 : 10.01 15568.79 60.82 0.00 0.00 8215.69 4466.16 18544.26 00:32:32.982 [2024-12-05T13:04:04.508Z] =================================================================================================================== 00:32:32.982 [2024-12-05T13:04:04.508Z] Total : 15568.79 60.82 0.00 0.00 8215.69 4466.16 18544.26 00:32:32.982 { 00:32:32.982 "results": [ 00:32:32.982 { 00:32:32.982 "job": "Nvme0n1", 00:32:32.982 "core_mask": "0x2", 00:32:32.982 "workload": "randwrite", 00:32:32.982 "status": "finished", 00:32:32.982 "queue_depth": 128, 00:32:32.982 "io_size": 4096, 00:32:32.982 "runtime": 10.005081, 00:32:32.982 "iops": 15568.789498056038, 00:32:32.982 "mibps": 60.8155839767814, 00:32:32.982 "io_failed": 0, 00:32:32.982 "io_timeout": 0, 00:32:32.982 "avg_latency_us": 8215.685672898435, 00:32:32.982 "min_latency_us": 4466.157037037037, 00:32:32.982 "max_latency_us": 18544.26074074074 00:32:32.982 } 00:32:32.982 ], 00:32:32.982 "core_count": 1 00:32:32.982 } 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2379578 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2379578 ']' 00:32:33.241 14:04:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2379578 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2379578 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2379578' 00:32:33.241 killing process with pid 2379578 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2379578 00:32:33.241 Received shutdown signal, test time was about 10.000000 seconds 00:32:33.241 00:32:33.241 Latency(us) 00:32:33.241 [2024-12-05T13:04:04.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.241 [2024-12-05T13:04:04.767Z] =================================================================================================================== 00:32:33.241 [2024-12-05T13:04:04.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2379578 00:32:33.241 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:33.500 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:34.085 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:34.085 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:34.085 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:34.085 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:34.085 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:34.343 [2024-12-05 14:04:05.841590] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:34.602 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:34.860 request: 00:32:34.860 { 00:32:34.860 "uuid": "7def1d54-9844-49e5-af8d-b73e830ae4ff", 00:32:34.860 "method": 
"bdev_lvol_get_lvstores", 00:32:34.860 "req_id": 1 00:32:34.860 } 00:32:34.860 Got JSON-RPC error response 00:32:34.860 response: 00:32:34.860 { 00:32:34.860 "code": -19, 00:32:34.860 "message": "No such device" 00:32:34.860 } 00:32:34.860 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:34.860 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:34.860 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:34.860 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:34.860 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:35.118 aio_bdev 00:32:35.118 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d2663320-6c49-4ac4-8404-7e7c3c1307c1 00:32:35.118 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d2663320-6c49-4ac4-8404-7e7c3c1307c1 00:32:35.118 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:35.118 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:35.118 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:35.118 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:35.118 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:35.376 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2663320-6c49-4ac4-8404-7e7c3c1307c1 -t 2000 00:32:35.633 [ 00:32:35.633 { 00:32:35.633 "name": "d2663320-6c49-4ac4-8404-7e7c3c1307c1", 00:32:35.633 "aliases": [ 00:32:35.634 "lvs/lvol" 00:32:35.634 ], 00:32:35.634 "product_name": "Logical Volume", 00:32:35.634 "block_size": 4096, 00:32:35.634 "num_blocks": 38912, 00:32:35.634 "uuid": "d2663320-6c49-4ac4-8404-7e7c3c1307c1", 00:32:35.634 "assigned_rate_limits": { 00:32:35.634 "rw_ios_per_sec": 0, 00:32:35.634 "rw_mbytes_per_sec": 0, 00:32:35.634 "r_mbytes_per_sec": 0, 00:32:35.634 "w_mbytes_per_sec": 0 00:32:35.634 }, 00:32:35.634 "claimed": false, 00:32:35.634 "zoned": false, 00:32:35.634 "supported_io_types": { 00:32:35.634 "read": true, 00:32:35.634 "write": true, 00:32:35.634 "unmap": true, 00:32:35.634 "flush": false, 00:32:35.634 "reset": true, 00:32:35.634 "nvme_admin": false, 00:32:35.634 "nvme_io": false, 00:32:35.634 "nvme_io_md": false, 00:32:35.634 "write_zeroes": true, 00:32:35.634 "zcopy": false, 00:32:35.634 "get_zone_info": false, 00:32:35.634 "zone_management": false, 00:32:35.634 "zone_append": false, 00:32:35.634 "compare": false, 00:32:35.634 "compare_and_write": false, 00:32:35.634 "abort": false, 00:32:35.634 "seek_hole": true, 00:32:35.634 "seek_data": true, 00:32:35.634 "copy": false, 00:32:35.634 "nvme_iov_md": false 00:32:35.634 }, 00:32:35.634 "driver_specific": { 00:32:35.634 "lvol": { 00:32:35.634 "lvol_store_uuid": "7def1d54-9844-49e5-af8d-b73e830ae4ff", 00:32:35.634 "base_bdev": "aio_bdev", 00:32:35.634 
"thin_provision": false, 00:32:35.634 "num_allocated_clusters": 38, 00:32:35.634 "snapshot": false, 00:32:35.634 "clone": false, 00:32:35.634 "esnap_clone": false 00:32:35.634 } 00:32:35.634 } 00:32:35.634 } 00:32:35.634 ] 00:32:35.634 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:35.634 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:35.634 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:35.890 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:35.890 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 00:32:35.890 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:36.148 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:36.148 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2663320-6c49-4ac4-8404-7e7c3c1307c1 00:32:36.406 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7def1d54-9844-49e5-af8d-b73e830ae4ff 
00:32:36.663 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:36.920 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:36.920 00:32:36.920 real 0m17.720s 00:32:36.920 user 0m17.285s 00:32:36.920 sys 0m1.808s 00:32:36.920 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.920 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.920 ************************************ 00:32:36.920 END TEST lvs_grow_clean 00:32:36.920 ************************************ 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:37.177 ************************************ 00:32:37.177 START TEST lvs_grow_dirty 00:32:37.177 ************************************ 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:37.177 14:04:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:37.177 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:37.435 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:37.435 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:37.693 14:04:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6070389b-18b7-4f80-8d61-201651c32f6b 00:32:37.693 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:37.693 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:37.951 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:37.951 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:37.951 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6070389b-18b7-4f80-8d61-201651c32f6b lvol 150 00:32:38.209 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a01e7f02-4376-41d7-aa75-d79475b7ecab 00:32:38.209 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:38.210 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:38.468 [2024-12-05 14:04:09.849528] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:38.468 [2024-12-05 
14:04:09.849629] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:38.468 true 00:32:38.468 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:38.468 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:38.726 14:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:38.726 14:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:38.984 14:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a01e7f02-4376-41d7-aa75-d79475b7ecab 00:32:39.242 14:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:39.501 [2024-12-05 14:04:10.973823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.501 14:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2381730 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2381730 /var/tmp/bdevperf.sock 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2381730 ']' 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:39.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.759 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:40.018 [2024-12-05 14:04:11.304908] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:32:40.018 [2024-12-05 14:04:11.304994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381730 ] 00:32:40.018 [2024-12-05 14:04:11.372010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.018 [2024-12-05 14:04:11.433151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.276 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.276 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:40.276 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:40.535 Nvme0n1 00:32:40.535 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:41.103 [ 00:32:41.103 { 00:32:41.103 "name": "Nvme0n1", 00:32:41.103 "aliases": [ 00:32:41.103 "a01e7f02-4376-41d7-aa75-d79475b7ecab" 00:32:41.103 ], 00:32:41.103 "product_name": "NVMe disk", 00:32:41.103 "block_size": 4096, 00:32:41.103 "num_blocks": 38912, 00:32:41.103 "uuid": "a01e7f02-4376-41d7-aa75-d79475b7ecab", 00:32:41.103 "numa_id": 0, 00:32:41.103 "assigned_rate_limits": { 00:32:41.103 "rw_ios_per_sec": 0, 00:32:41.103 "rw_mbytes_per_sec": 0, 00:32:41.103 "r_mbytes_per_sec": 0, 00:32:41.103 "w_mbytes_per_sec": 0 00:32:41.103 }, 00:32:41.103 "claimed": false, 00:32:41.103 "zoned": false, 
00:32:41.103 "supported_io_types": { 00:32:41.103 "read": true, 00:32:41.103 "write": true, 00:32:41.103 "unmap": true, 00:32:41.103 "flush": true, 00:32:41.103 "reset": true, 00:32:41.103 "nvme_admin": true, 00:32:41.103 "nvme_io": true, 00:32:41.103 "nvme_io_md": false, 00:32:41.103 "write_zeroes": true, 00:32:41.103 "zcopy": false, 00:32:41.103 "get_zone_info": false, 00:32:41.103 "zone_management": false, 00:32:41.103 "zone_append": false, 00:32:41.103 "compare": true, 00:32:41.103 "compare_and_write": true, 00:32:41.103 "abort": true, 00:32:41.103 "seek_hole": false, 00:32:41.103 "seek_data": false, 00:32:41.103 "copy": true, 00:32:41.103 "nvme_iov_md": false 00:32:41.103 }, 00:32:41.103 "memory_domains": [ 00:32:41.103 { 00:32:41.103 "dma_device_id": "system", 00:32:41.103 "dma_device_type": 1 00:32:41.103 } 00:32:41.103 ], 00:32:41.103 "driver_specific": { 00:32:41.103 "nvme": [ 00:32:41.103 { 00:32:41.103 "trid": { 00:32:41.103 "trtype": "TCP", 00:32:41.103 "adrfam": "IPv4", 00:32:41.103 "traddr": "10.0.0.2", 00:32:41.103 "trsvcid": "4420", 00:32:41.103 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:41.103 }, 00:32:41.103 "ctrlr_data": { 00:32:41.103 "cntlid": 1, 00:32:41.103 "vendor_id": "0x8086", 00:32:41.103 "model_number": "SPDK bdev Controller", 00:32:41.103 "serial_number": "SPDK0", 00:32:41.103 "firmware_revision": "25.01", 00:32:41.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:41.103 "oacs": { 00:32:41.103 "security": 0, 00:32:41.103 "format": 0, 00:32:41.103 "firmware": 0, 00:32:41.103 "ns_manage": 0 00:32:41.103 }, 00:32:41.103 "multi_ctrlr": true, 00:32:41.103 "ana_reporting": false 00:32:41.103 }, 00:32:41.103 "vs": { 00:32:41.103 "nvme_version": "1.3" 00:32:41.103 }, 00:32:41.103 "ns_data": { 00:32:41.103 "id": 1, 00:32:41.103 "can_share": true 00:32:41.103 } 00:32:41.103 } 00:32:41.103 ], 00:32:41.103 "mp_policy": "active_passive" 00:32:41.103 } 00:32:41.103 } 00:32:41.103 ] 00:32:41.103 14:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2381865 00:32:41.103 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:41.103 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:41.103 Running I/O for 10 seconds... 00:32:42.039 Latency(us) 00:32:42.039 [2024-12-05T13:04:13.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.039 Nvme0n1 : 1.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:32:42.039 [2024-12-05T13:04:13.565Z] =================================================================================================================== 00:32:42.039 [2024-12-05T13:04:13.565Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:32:42.039 00:32:42.976 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:42.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.976 Nvme0n1 : 2.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:32:42.976 [2024-12-05T13:04:14.502Z] =================================================================================================================== 00:32:42.976 [2024-12-05T13:04:14.502Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:32:42.976 00:32:43.234 true 00:32:43.234 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:43.234 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:43.492 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:43.492 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:43.492 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2381865 00:32:44.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.059 Nvme0n1 : 3.00 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:32:44.059 [2024-12-05T13:04:15.585Z] =================================================================================================================== 00:32:44.059 [2024-12-05T13:04:15.585Z] Total : 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:32:44.059 00:32:45.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.000 Nvme0n1 : 4.00 15430.50 60.28 0.00 0.00 0.00 0.00 0.00 00:32:45.000 [2024-12-05T13:04:16.526Z] =================================================================================================================== 00:32:45.000 [2024-12-05T13:04:16.526Z] Total : 15430.50 60.28 0.00 0.00 0.00 0.00 0.00 00:32:45.000 00:32:46.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.381 Nvme0n1 : 5.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:32:46.381 [2024-12-05T13:04:17.907Z] =================================================================================================================== 00:32:46.381 [2024-12-05T13:04:17.907Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:32:46.381 00:32:47.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:47.010 Nvme0n1 : 6.00 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:32:47.010 [2024-12-05T13:04:18.536Z] =================================================================================================================== 00:32:47.010 [2024-12-05T13:04:18.536Z] Total : 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:32:47.010 00:32:48.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.005 Nvme0n1 : 7.00 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:32:48.005 [2024-12-05T13:04:19.531Z] =================================================================================================================== 00:32:48.005 [2024-12-05T13:04:19.531Z] Total : 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:32:48.005 00:32:49.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.383 Nvme0n1 : 8.00 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:32:49.383 [2024-12-05T13:04:20.909Z] =================================================================================================================== 00:32:49.383 [2024-12-05T13:04:20.909Z] Total : 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:32:49.383 00:32:50.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:50.319 Nvme0n1 : 9.00 15677.44 61.24 0.00 0.00 0.00 0.00 0.00 00:32:50.319 [2024-12-05T13:04:21.845Z] =================================================================================================================== 00:32:50.319 [2024-12-05T13:04:21.845Z] Total : 15677.44 61.24 0.00 0.00 0.00 0.00 0.00 00:32:50.319 00:32:51.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.257 Nvme0n1 : 10.00 15709.90 61.37 0.00 0.00 0.00 0.00 0.00 00:32:51.257 [2024-12-05T13:04:22.783Z] =================================================================================================================== 00:32:51.257 [2024-12-05T13:04:22.783Z] Total : 15709.90 61.37 0.00 0.00 0.00 0.00 0.00 00:32:51.257 00:32:51.257 
00:32:51.257 Latency(us) 00:32:51.257 [2024-12-05T13:04:22.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.257 Nvme0n1 : 10.01 15708.79 61.36 0.00 0.00 8143.84 7233.23 18544.26 00:32:51.257 [2024-12-05T13:04:22.783Z] =================================================================================================================== 00:32:51.257 [2024-12-05T13:04:22.783Z] Total : 15708.79 61.36 0.00 0.00 8143.84 7233.23 18544.26 00:32:51.257 { 00:32:51.257 "results": [ 00:32:51.257 { 00:32:51.257 "job": "Nvme0n1", 00:32:51.257 "core_mask": "0x2", 00:32:51.257 "workload": "randwrite", 00:32:51.257 "status": "finished", 00:32:51.257 "queue_depth": 128, 00:32:51.257 "io_size": 4096, 00:32:51.257 "runtime": 10.008852, 00:32:51.257 "iops": 15708.794575042173, 00:32:51.257 "mibps": 61.36247880875849, 00:32:51.257 "io_failed": 0, 00:32:51.257 "io_timeout": 0, 00:32:51.257 "avg_latency_us": 8143.840068332435, 00:32:51.257 "min_latency_us": 7233.2325925925925, 00:32:51.257 "max_latency_us": 18544.26074074074 00:32:51.257 } 00:32:51.257 ], 00:32:51.257 "core_count": 1 00:32:51.257 } 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2381730 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2381730 ']' 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2381730 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.257 14:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381730 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381730' 00:32:51.257 killing process with pid 2381730 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2381730 00:32:51.257 Received shutdown signal, test time was about 10.000000 seconds 00:32:51.257 00:32:51.257 Latency(us) 00:32:51.257 [2024-12-05T13:04:22.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.257 [2024-12-05T13:04:22.783Z] =================================================================================================================== 00:32:51.257 [2024-12-05T13:04:22.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:51.257 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2381730 00:32:51.516 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:51.774 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.031 14:04:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:52.031 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2379142 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2379142 00:32:52.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2379142 Killed "${NVMF_APP[@]}" "$@" 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2383189 00:32:52.292 14:04:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2383189 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2383189 ']' 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.292 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:52.292 [2024-12-05 14:04:23.748991] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:52.292 [2024-12-05 14:04:23.750113] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:32:52.292 [2024-12-05 14:04:23.750181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.552 [2024-12-05 14:04:23.824111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.552 [2024-12-05 14:04:23.881166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.552 [2024-12-05 14:04:23.881219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.552 [2024-12-05 14:04:23.881247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.552 [2024-12-05 14:04:23.881258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.552 [2024-12-05 14:04:23.881268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:52.552 [2024-12-05 14:04:23.881907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.552 [2024-12-05 14:04:23.980211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:52.552 [2024-12-05 14:04:23.980512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:52.552 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.552 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:52.552 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:52.552 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.552 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:52.552 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.552 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:52.811 [2024-12-05 14:04:24.288658] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:52.811 [2024-12-05 14:04:24.288818] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:52.811 [2024-12-05 14:04:24.288867] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a01e7f02-4376-41d7-aa75-d79475b7ecab 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=a01e7f02-4376-41d7-aa75-d79475b7ecab 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:52.811 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:53.069 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a01e7f02-4376-41d7-aa75-d79475b7ecab -t 2000 00:32:53.327 [ 00:32:53.327 { 00:32:53.327 "name": "a01e7f02-4376-41d7-aa75-d79475b7ecab", 00:32:53.327 "aliases": [ 00:32:53.327 "lvs/lvol" 00:32:53.327 ], 00:32:53.327 "product_name": "Logical Volume", 00:32:53.327 "block_size": 4096, 00:32:53.327 "num_blocks": 38912, 00:32:53.327 "uuid": "a01e7f02-4376-41d7-aa75-d79475b7ecab", 00:32:53.327 "assigned_rate_limits": { 00:32:53.327 "rw_ios_per_sec": 0, 00:32:53.327 "rw_mbytes_per_sec": 0, 00:32:53.327 "r_mbytes_per_sec": 0, 00:32:53.327 "w_mbytes_per_sec": 0 00:32:53.327 }, 00:32:53.327 "claimed": false, 00:32:53.327 "zoned": false, 00:32:53.327 "supported_io_types": { 00:32:53.327 "read": true, 00:32:53.327 "write": true, 00:32:53.327 "unmap": true, 00:32:53.327 "flush": false, 00:32:53.327 "reset": true, 00:32:53.327 "nvme_admin": false, 00:32:53.327 "nvme_io": false, 00:32:53.327 "nvme_io_md": false, 00:32:53.327 "write_zeroes": true, 
00:32:53.327 "zcopy": false, 00:32:53.327 "get_zone_info": false, 00:32:53.327 "zone_management": false, 00:32:53.327 "zone_append": false, 00:32:53.327 "compare": false, 00:32:53.327 "compare_and_write": false, 00:32:53.327 "abort": false, 00:32:53.327 "seek_hole": true, 00:32:53.327 "seek_data": true, 00:32:53.327 "copy": false, 00:32:53.327 "nvme_iov_md": false 00:32:53.327 }, 00:32:53.327 "driver_specific": { 00:32:53.327 "lvol": { 00:32:53.327 "lvol_store_uuid": "6070389b-18b7-4f80-8d61-201651c32f6b", 00:32:53.327 "base_bdev": "aio_bdev", 00:32:53.327 "thin_provision": false, 00:32:53.327 "num_allocated_clusters": 38, 00:32:53.327 "snapshot": false, 00:32:53.327 "clone": false, 00:32:53.327 "esnap_clone": false 00:32:53.327 } 00:32:53.327 } 00:32:53.327 } 00:32:53.327 ] 00:32:53.327 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:53.327 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:53.327 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:53.894 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:53.894 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:53.894 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:53.894 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:53.894 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:54.153 [2024-12-05 14:04:25.646452] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:54.153 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:54.719 request: 00:32:54.719 { 00:32:54.719 "uuid": "6070389b-18b7-4f80-8d61-201651c32f6b", 00:32:54.719 "method": "bdev_lvol_get_lvstores", 00:32:54.719 "req_id": 1 00:32:54.719 } 00:32:54.719 Got JSON-RPC error response 00:32:54.719 response: 00:32:54.719 { 00:32:54.719 "code": -19, 00:32:54.719 "message": "No such device" 00:32:54.719 } 00:32:54.719 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:54.719 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:54.719 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:54.719 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:54.719 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:54.719 aio_bdev 00:32:54.719 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a01e7f02-4376-41d7-aa75-d79475b7ecab 00:32:54.719 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a01e7f02-4376-41d7-aa75-d79475b7ecab 00:32:54.719 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:54.719 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:54.720 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:54.720 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:54.720 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:54.978 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a01e7f02-4376-41d7-aa75-d79475b7ecab -t 2000 00:32:55.236 [ 00:32:55.236 { 00:32:55.236 "name": "a01e7f02-4376-41d7-aa75-d79475b7ecab", 00:32:55.236 "aliases": [ 00:32:55.236 "lvs/lvol" 00:32:55.236 ], 00:32:55.236 "product_name": "Logical Volume", 00:32:55.236 "block_size": 4096, 00:32:55.236 "num_blocks": 38912, 00:32:55.236 "uuid": "a01e7f02-4376-41d7-aa75-d79475b7ecab", 00:32:55.236 "assigned_rate_limits": { 00:32:55.236 "rw_ios_per_sec": 0, 00:32:55.236 "rw_mbytes_per_sec": 0, 00:32:55.236 
"r_mbytes_per_sec": 0, 00:32:55.236 "w_mbytes_per_sec": 0 00:32:55.236 }, 00:32:55.236 "claimed": false, 00:32:55.236 "zoned": false, 00:32:55.236 "supported_io_types": { 00:32:55.236 "read": true, 00:32:55.236 "write": true, 00:32:55.236 "unmap": true, 00:32:55.236 "flush": false, 00:32:55.236 "reset": true, 00:32:55.236 "nvme_admin": false, 00:32:55.236 "nvme_io": false, 00:32:55.236 "nvme_io_md": false, 00:32:55.236 "write_zeroes": true, 00:32:55.236 "zcopy": false, 00:32:55.236 "get_zone_info": false, 00:32:55.236 "zone_management": false, 00:32:55.236 "zone_append": false, 00:32:55.236 "compare": false, 00:32:55.236 "compare_and_write": false, 00:32:55.236 "abort": false, 00:32:55.236 "seek_hole": true, 00:32:55.236 "seek_data": true, 00:32:55.236 "copy": false, 00:32:55.236 "nvme_iov_md": false 00:32:55.236 }, 00:32:55.236 "driver_specific": { 00:32:55.236 "lvol": { 00:32:55.236 "lvol_store_uuid": "6070389b-18b7-4f80-8d61-201651c32f6b", 00:32:55.236 "base_bdev": "aio_bdev", 00:32:55.236 "thin_provision": false, 00:32:55.236 "num_allocated_clusters": 38, 00:32:55.236 "snapshot": false, 00:32:55.236 "clone": false, 00:32:55.236 "esnap_clone": false 00:32:55.236 } 00:32:55.236 } 00:32:55.236 } 00:32:55.236 ] 00:32:55.494 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:55.494 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:55.494 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:55.754 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:55.754 14:04:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:55.754 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:56.013 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:56.013 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a01e7f02-4376-41d7-aa75-d79475b7ecab 00:32:56.272 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6070389b-18b7-4f80-8d61-201651c32f6b 00:32:56.530 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:56.788 00:32:56.788 real 0m19.699s 00:32:56.788 user 0m36.731s 00:32:56.788 sys 0m4.781s 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:56.788 ************************************ 00:32:56.788 END TEST lvs_grow_dirty 00:32:56.788 ************************************ 
00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:56.788 nvmf_trace.0 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:56.788 14:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:56.788 rmmod nvme_tcp 00:32:56.788 rmmod nvme_fabrics 00:32:56.788 rmmod nvme_keyring 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2383189 ']' 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2383189 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2383189 ']' 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2383189 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.788 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2383189 00:32:57.046 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.047 
14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2383189' 00:32:57.047 killing process with pid 2383189 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2383189 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2383189 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.047 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.579 
14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.579 00:32:59.580 real 0m42.974s 00:32:59.580 user 0m55.800s 00:32:59.580 sys 0m8.648s 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:59.580 ************************************ 00:32:59.580 END TEST nvmf_lvs_grow 00:32:59.580 ************************************ 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.580 ************************************ 00:32:59.580 START TEST nvmf_bdev_io_wait 00:32:59.580 ************************************ 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:59.580 * Looking for test storage... 
00:32:59.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.580 --rc genhtml_branch_coverage=1 00:32:59.580 --rc genhtml_function_coverage=1 00:32:59.580 --rc genhtml_legend=1 00:32:59.580 --rc geninfo_all_blocks=1 00:32:59.580 --rc geninfo_unexecuted_blocks=1 00:32:59.580 00:32:59.580 ' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.580 --rc genhtml_branch_coverage=1 00:32:59.580 --rc genhtml_function_coverage=1 00:32:59.580 --rc genhtml_legend=1 00:32:59.580 --rc geninfo_all_blocks=1 00:32:59.580 --rc geninfo_unexecuted_blocks=1 00:32:59.580 00:32:59.580 ' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.580 --rc genhtml_branch_coverage=1 00:32:59.580 --rc genhtml_function_coverage=1 00:32:59.580 --rc genhtml_legend=1 00:32:59.580 --rc geninfo_all_blocks=1 00:32:59.580 --rc geninfo_unexecuted_blocks=1 00:32:59.580 00:32:59.580 ' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.580 --rc genhtml_branch_coverage=1 00:32:59.580 --rc genhtml_function_coverage=1 
00:32:59.580 --rc genhtml_legend=1 00:32:59.580 --rc geninfo_all_blocks=1 00:32:59.580 --rc geninfo_unexecuted_blocks=1 00:32:59.580 00:32:59.580 ' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:59.580 14:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.580 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.581 14:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.581 14:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:59.581 14:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.581 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:01.484 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:01.485 14:04:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:01.485 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:01.485 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:01.485 Found net devices under 0000:09:00.0: cvl_0_0 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:01.485 Found net devices under 0000:09:00.1: cvl_0_1 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:01.485 14:04:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:01.485 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:01.486 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.486 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.486 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:01.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:01.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:33:01.744 00:33:01.744 --- 10.0.0.2 ping statistics --- 00:33:01.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.744 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:01.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:33:01.744 00:33:01.744 --- 10.0.0.1 ping statistics --- 00:33:01.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.744 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:01.744 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2385718 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2385718 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2385718 ']' 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:01.744 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.745 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:01.745 [2024-12-05 14:04:33.140952] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:01.745 [2024-12-05 14:04:33.142025] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:01.745 [2024-12-05 14:04:33.142090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.745 [2024-12-05 14:04:33.224600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:02.004 [2024-12-05 14:04:33.297057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.004 [2024-12-05 14:04:33.297107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.004 [2024-12-05 14:04:33.297141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.004 [2024-12-05 14:04:33.297157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.004 [2024-12-05 14:04:33.297171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:02.004 [2024-12-05 14:04:33.299065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.004 [2024-12-05 14:04:33.299126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:02.004 [2024-12-05 14:04:33.299188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:02.004 [2024-12-05 14:04:33.299195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.004 [2024-12-05 14:04:33.299789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.004 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.004 [2024-12-05 14:04:33.464439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:02.004 [2024-12-05 14:04:33.464663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:02.004 [2024-12-05 14:04:33.465626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:02.004 [2024-12-05 14:04:33.466526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.004 [2024-12-05 14:04:33.475985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.004 Malloc0 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.004 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.004 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.263 [2024-12-05 14:04:33.532194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2385745 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2385746 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:02.263 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2385749 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:02.263 { 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme$subsystem", 00:33:02.263 "trtype": "$TEST_TRANSPORT", 00:33:02.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "$NVMF_PORT", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.263 "hdgst": ${hdgst:-false}, 00:33:02.263 "ddgst": ${ddgst:-false} 00:33:02.263 }, 00:33:02.263 "method": "bdev_nvme_attach_controller" 00:33:02.263 } 00:33:02.263 EOF 00:33:02.263 )") 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:02.263 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2385751 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:02.263 { 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme$subsystem", 00:33:02.263 "trtype": "$TEST_TRANSPORT", 00:33:02.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "$NVMF_PORT", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.263 "hdgst": ${hdgst:-false}, 00:33:02.263 "ddgst": ${ddgst:-false} 00:33:02.263 }, 00:33:02.263 "method": "bdev_nvme_attach_controller" 00:33:02.263 } 00:33:02.263 EOF 00:33:02.263 )") 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:02.263 { 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme$subsystem", 00:33:02.263 "trtype": "$TEST_TRANSPORT", 00:33:02.263 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "$NVMF_PORT", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.263 "hdgst": ${hdgst:-false}, 00:33:02.263 "ddgst": ${ddgst:-false} 00:33:02.263 }, 00:33:02.263 "method": "bdev_nvme_attach_controller" 00:33:02.263 } 00:33:02.263 EOF 00:33:02.263 )") 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:02.263 { 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme$subsystem", 00:33:02.263 "trtype": "$TEST_TRANSPORT", 00:33:02.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "$NVMF_PORT", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.263 "hdgst": ${hdgst:-false}, 00:33:02.263 "ddgst": ${ddgst:-false} 00:33:02.263 }, 00:33:02.263 "method": 
"bdev_nvme_attach_controller" 00:33:02.263 } 00:33:02.263 EOF 00:33:02.263 )") 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2385745 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme1", 00:33:02.263 "trtype": "tcp", 00:33:02.263 "traddr": "10.0.0.2", 00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "4420", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:02.263 "hdgst": false, 00:33:02.263 "ddgst": false 00:33:02.263 }, 00:33:02.263 "method": "bdev_nvme_attach_controller" 00:33:02.263 }' 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme1", 00:33:02.263 "trtype": "tcp", 00:33:02.263 "traddr": "10.0.0.2", 00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "4420", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:02.263 "hdgst": false, 00:33:02.263 "ddgst": false 00:33:02.263 }, 00:33:02.263 "method": "bdev_nvme_attach_controller" 00:33:02.263 }' 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme1", 00:33:02.263 "trtype": "tcp", 00:33:02.263 "traddr": "10.0.0.2", 00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "4420", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:02.263 "hdgst": false, 00:33:02.263 "ddgst": false 00:33:02.263 }, 00:33:02.263 "method": "bdev_nvme_attach_controller" 00:33:02.263 }' 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:02.263 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:02.263 "params": { 00:33:02.263 "name": "Nvme1", 00:33:02.263 "trtype": "tcp", 00:33:02.263 "traddr": "10.0.0.2", 00:33:02.263 "adrfam": "ipv4", 00:33:02.263 "trsvcid": "4420", 00:33:02.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:02.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:02.263 "hdgst": false, 00:33:02.263 "ddgst": false 00:33:02.263 }, 00:33:02.263 "method": "bdev_nvme_attach_controller" 
00:33:02.263 }' 00:33:02.263 [2024-12-05 14:04:33.584814] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:02.263 [2024-12-05 14:04:33.584815] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:02.263 [2024-12-05 14:04:33.584814] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:02.263 [2024-12-05 14:04:33.584828] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:02.263 [2024-12-05 14:04:33.584905] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-05 14:04:33.584907] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-05 14:04:33.584905] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:02.263 --proc-type=auto ] 00:33:02.263 [2024-12-05 14:04:33.584924] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib--proc-type=auto ] 00:33:02.263 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:02.263 [2024-12-05 14:04:33.768217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.552 [2024-12-05 14:04:33.823964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:02.552 [2024-12-05 14:04:33.871602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.552 [2024-12-05 14:04:33.927886] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:02.552 [2024-12-05 14:04:33.975058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.552 [2024-12-05 14:04:34.031260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:02.810 [2024-12-05 14:04:34.050627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.810 [2024-12-05 14:04:34.102390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:02.810 Running I/O for 1 seconds... 00:33:02.810 Running I/O for 1 seconds... 00:33:02.810 Running I/O for 1 seconds... 00:33:03.068 Running I/O for 1 seconds... 00:33:03.999 10626.00 IOPS, 41.51 MiB/s 00:33:04.000 Latency(us) 00:33:04.000 [2024-12-05T13:04:35.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.000 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:04.000 Nvme1n1 : 1.01 10682.45 41.73 0.00 0.00 11935.34 4611.79 13883.92 00:33:04.000 [2024-12-05T13:04:35.526Z] =================================================================================================================== 00:33:04.000 [2024-12-05T13:04:35.526Z] Total : 10682.45 41.73 0.00 0.00 11935.34 4611.79 13883.92 00:33:04.000 8765.00 IOPS, 34.24 MiB/s [2024-12-05T13:04:35.526Z] 158584.00 IOPS, 619.47 MiB/s 00:33:04.000 Latency(us) 00:33:04.000 [2024-12-05T13:04:35.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.000 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:04.000 Nvme1n1 : 1.00 158266.29 618.23 0.00 0.00 804.35 327.68 1990.35 00:33:04.000 [2024-12-05T13:04:35.526Z] =================================================================================================================== 00:33:04.000 [2024-12-05T13:04:35.526Z] Total : 158266.29 618.23 0.00 0.00 804.35 327.68 1990.35 00:33:04.000 9645.00 IOPS, 37.68 MiB/s 00:33:04.000 Latency(us) 00:33:04.000 
[2024-12-05T13:04:35.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.000 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:04.000 Nvme1n1 : 1.01 9729.29 38.01 0.00 0.00 13115.53 2512.21 18932.62 00:33:04.000 [2024-12-05T13:04:35.526Z] =================================================================================================================== 00:33:04.000 [2024-12-05T13:04:35.526Z] Total : 9729.29 38.01 0.00 0.00 13115.53 2512.21 18932.62 00:33:04.000 00:33:04.000 Latency(us) 00:33:04.000 [2024-12-05T13:04:35.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.000 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:04.000 Nvme1n1 : 1.05 8468.97 33.08 0.00 0.00 14458.97 4781.70 48933.55 00:33:04.000 [2024-12-05T13:04:35.526Z] =================================================================================================================== 00:33:04.000 [2024-12-05T13:04:35.526Z] Total : 8468.97 33.08 0.00 0.00 14458.97 4781.70 48933.55 00:33:04.000 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2385746 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2385749 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2385751 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
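The `wait 2385746` / `wait 2385749` / `wait 2385751` calls in the trace above reap the backgrounded bdevperf jobs whose result tables were just printed. A minimal sketch of that launch-then-wait pattern, assuming nothing beyond plain bash (the `sleep` stand-ins and the `pids` array name are illustrative, not the test script's actual commands):

```shell
#!/usr/bin/env bash
# Launch several background jobs (stand-ins for the four bdevperf
# instances with core masks 0x10/0x20/0x40/0x80) and wait on each PID.
pids=()
for mask in 0x10 0x20 0x40 0x80; do
    sleep 0.1 &          # stand-in for: bdevperf -m "$mask" ...
    pids+=($!)
done
for pid in "${pids[@]}"; do
    wait "$pid"          # blocks until that job has finished
done
echo "collected ${#pids[@]} jobs"
```

Waiting on each PID individually (rather than a bare `wait`) lets the script observe each job's exit status separately.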
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.257 rmmod nvme_tcp 00:33:04.257 rmmod nvme_fabrics 00:33:04.257 rmmod nvme_keyring 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2385718 ']' 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2385718 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2385718 ']' 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@958 -- # kill -0 2385718 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2385718 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2385718' 00:33:04.257 killing process with pid 2385718 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2385718 00:33:04.257 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2385718 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:04.515 14:04:35 
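The `killprocess 2385718` sequence above probes the PID with `kill -0`, checks the process name with `ps -o comm=` (refusing to kill a bare `sudo`), then kills and reaps it. A minimal sketch of that pattern, assuming GNU `ps`; this is an illustration, not the exact autotest_common.sh helper:

```shell
#!/usr/bin/env bash
# Probe, name-check, kill, and reap a process, mirroring the
# killprocess trace above.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1           # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap it if it is our child
    return 0
}
sleep 60 &
killprocess $!
```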
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.515 14:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.418 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:06.418 00:33:06.418 real 0m7.287s 00:33:06.418 user 0m14.463s 00:33:06.418 sys 0m4.176s 00:33:06.418 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.418 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:06.418 ************************************ 00:33:06.418 END TEST nvmf_bdev_io_wait 00:33:06.418 ************************************ 00:33:06.418 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:06.418 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:06.418 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.418 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:33:06.703 ************************************ 00:33:06.703 START TEST nvmf_queue_depth 00:33:06.703 ************************************ 00:33:06.703 14:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:06.703 * Looking for test storage... 00:33:06.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.703 14:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.703 14:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:06.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.703 --rc genhtml_branch_coverage=1 00:33:06.703 --rc genhtml_function_coverage=1 00:33:06.703 --rc genhtml_legend=1 00:33:06.703 --rc geninfo_all_blocks=1 00:33:06.703 --rc geninfo_unexecuted_blocks=1 00:33:06.703 00:33:06.703 ' 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:06.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.703 --rc genhtml_branch_coverage=1 00:33:06.703 --rc genhtml_function_coverage=1 00:33:06.703 --rc genhtml_legend=1 00:33:06.703 --rc geninfo_all_blocks=1 00:33:06.703 --rc geninfo_unexecuted_blocks=1 00:33:06.703 00:33:06.703 ' 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:06.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.703 --rc genhtml_branch_coverage=1 00:33:06.703 --rc genhtml_function_coverage=1 00:33:06.703 --rc 
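The `lt 1.15 2` trace above walks scripts/common.sh's `cmp_versions`: both version strings are split on `.` and `-` (`IFS=.-`), then compared component-wise as integers, with missing components treated as 0. A minimal sketch under those assumptions (`version_lt` is an illustrative name, not the script's actual helper):

```shell
#!/usr/bin/env bash
# Component-wise dotted-version comparison, as traced above for
# "lt 1.15 2" (the lcov version gate): returns 0 when $1 < $2.
version_lt() {
    local IFS=.-                 # split on dots and dashes, like cmp_versions
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing parts count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                     # equal is not strictly less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Note the comparison is numeric per component, so `1.15` sorts above `1.2`, which is the behavior the version gate needs.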
genhtml_legend=1 00:33:06.703 --rc geninfo_all_blocks=1 00:33:06.703 --rc geninfo_unexecuted_blocks=1 00:33:06.703 00:33:06.703 ' 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:06.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.703 --rc genhtml_branch_coverage=1 00:33:06.703 --rc genhtml_function_coverage=1 00:33:06.703 --rc genhtml_legend=1 00:33:06.703 --rc geninfo_all_blocks=1 00:33:06.703 --rc geninfo_unexecuted_blocks=1 00:33:06.703 00:33:06.703 ' 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.703 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.704 14:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.704 14:04:38 
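In the `build_nvmf_app_args` trace above (nvmf/common.sh@25 through @34), the target's command line is a bash array that options are appended to conditionally; `--interrupt-mode` is added only because this run sets the interrupt-mode flag. A minimal sketch of that pattern, with illustrative variable values:

```shell
#!/usr/bin/env bash
# Conditional argument assembly, as in the build_nvmf_app_args trace:
# options join the array only when their feature flag is set.
interrupt_mode=1                      # this run requested --interrupt-mode
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF)    # base command; values are illustrative
if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
fi
echo "${NVMF_APP[@]}"
```

Building the command as an array (and expanding it later as `"${NVMF_APP[@]}"`) keeps each option a separate word, avoiding the quoting bugs of string concatenation.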
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:06.704 14:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:06.704 14:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.258 
14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.258 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:09.259 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.259 14:04:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:09.259 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:09.259 Found net devices under 0000:09:00.0: cvl_0_0 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:09.259 Found net devices under 0000:09:00.1: cvl_0_1 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.259 14:04:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:09.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:09.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:33:09.259 00:33:09.259 --- 10.0.0.2 ping statistics --- 00:33:09.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.259 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:09.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:33:09.259 00:33:09.259 --- 10.0.0.1 ping statistics --- 00:33:09.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.259 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:09.259 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:09.260 14:04:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2387972 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2387972 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2387972 ']' 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 [2024-12-05 14:04:40.400528] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:09.260 [2024-12-05 14:04:40.401603] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:09.260 [2024-12-05 14:04:40.401668] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.260 [2024-12-05 14:04:40.475279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.260 [2024-12-05 14:04:40.529245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.260 [2024-12-05 14:04:40.529302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.260 [2024-12-05 14:04:40.529329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.260 [2024-12-05 14:04:40.529341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.260 [2024-12-05 14:04:40.529350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.260 [2024-12-05 14:04:40.529979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.260 [2024-12-05 14:04:40.613733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:09.260 [2024-12-05 14:04:40.614010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 [2024-12-05 14:04:40.662658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 Malloc0 00:33:09.260 14:04:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 [2024-12-05 14:04:40.722708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.260 
14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2388091 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2388091 /var/tmp/bdevperf.sock 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2388091 ']' 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.260 14:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.260 [2024-12-05 14:04:40.771739] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:33:09.260 [2024-12-05 14:04:40.771801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388091 ] 00:33:09.518 [2024-12-05 14:04:40.838179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.518 [2024-12-05 14:04:40.892450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.518 14:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.518 14:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:09.518 14:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:09.518 14:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.518 14:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:09.778 NVMe0n1 00:33:09.778 14:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.778 14:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:09.778 Running I/O for 10 seconds... 
00:33:12.092 8192.00 IOPS, 32.00 MiB/s [2024-12-05T13:04:44.187Z] 8211.50 IOPS, 32.08 MiB/s [2024-12-05T13:04:45.570Z] 8377.67 IOPS, 32.73 MiB/s [2024-12-05T13:04:46.504Z] 8407.50 IOPS, 32.84 MiB/s [2024-12-05T13:04:47.438Z] 8392.00 IOPS, 32.78 MiB/s [2024-12-05T13:04:48.391Z] 8365.50 IOPS, 32.68 MiB/s [2024-12-05T13:04:49.323Z] 8402.00 IOPS, 32.82 MiB/s [2024-12-05T13:04:50.256Z] 8435.88 IOPS, 32.95 MiB/s [2024-12-05T13:04:51.634Z] 8419.56 IOPS, 32.89 MiB/s [2024-12-05T13:04:51.634Z] 8416.20 IOPS, 32.88 MiB/s 00:33:20.108 Latency(us) 00:33:20.108 [2024-12-05T13:04:51.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.108 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:20.108 Verification LBA range: start 0x0 length 0x4000 00:33:20.108 NVMe0n1 : 10.07 8459.43 33.04 0.00 0.00 120513.03 9854.67 69905.07 00:33:20.108 [2024-12-05T13:04:51.634Z] =================================================================================================================== 00:33:20.108 [2024-12-05T13:04:51.634Z] Total : 8459.43 33.04 0.00 0.00 120513.03 9854.67 69905.07 00:33:20.108 { 00:33:20.108 "results": [ 00:33:20.108 { 00:33:20.108 "job": "NVMe0n1", 00:33:20.108 "core_mask": "0x1", 00:33:20.108 "workload": "verify", 00:33:20.109 "status": "finished", 00:33:20.109 "verify_range": { 00:33:20.109 "start": 0, 00:33:20.109 "length": 16384 00:33:20.109 }, 00:33:20.109 "queue_depth": 1024, 00:33:20.109 "io_size": 4096, 00:33:20.109 "runtime": 10.066285, 00:33:20.109 "iops": 8459.426690184115, 00:33:20.109 "mibps": 33.0446355085317, 00:33:20.109 "io_failed": 0, 00:33:20.109 "io_timeout": 0, 00:33:20.109 "avg_latency_us": 120513.0314321988, 00:33:20.109 "min_latency_us": 9854.672592592593, 00:33:20.109 "max_latency_us": 69905.06666666667 00:33:20.109 } 00:33:20.109 ], 00:33:20.109 "core_count": 1 00:33:20.109 } 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2388091 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2388091 ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2388091 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2388091 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2388091' 00:33:20.109 killing process with pid 2388091 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2388091 00:33:20.109 Received shutdown signal, test time was about 10.000000 seconds 00:33:20.109 00:33:20.109 Latency(us) 00:33:20.109 [2024-12-05T13:04:51.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.109 [2024-12-05T13:04:51.635Z] =================================================================================================================== 00:33:20.109 [2024-12-05T13:04:51.635Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2388091 00:33:20.109 14:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.109 rmmod nvme_tcp 00:33:20.109 rmmod nvme_fabrics 00:33:20.109 rmmod nvme_keyring 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2387972 ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2387972 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2387972 ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2387972 00:33:20.109 14:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2387972 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2387972' 00:33:20.109 killing process with pid 2387972 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2387972 00:33:20.109 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2387972 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.369 14:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.907 00:33:22.907 real 0m15.899s 00:33:22.907 user 0m20.908s 00:33:22.907 sys 0m3.786s 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:22.907 ************************************ 00:33:22.907 END TEST nvmf_queue_depth 00:33:22.907 ************************************ 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.907 ************************************ 00:33:22.907 START 
TEST nvmf_target_multipath 00:33:22.907 ************************************ 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:22.907 * Looking for test storage... 00:33:22.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.907 14:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.907 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.907 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.907 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.907 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.908 14:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.908 --rc genhtml_branch_coverage=1 00:33:22.908 --rc genhtml_function_coverage=1 00:33:22.908 --rc genhtml_legend=1 00:33:22.908 --rc geninfo_all_blocks=1 00:33:22.908 --rc geninfo_unexecuted_blocks=1 00:33:22.908 00:33:22.908 ' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.908 --rc genhtml_branch_coverage=1 00:33:22.908 --rc genhtml_function_coverage=1 00:33:22.908 --rc genhtml_legend=1 00:33:22.908 --rc geninfo_all_blocks=1 00:33:22.908 --rc geninfo_unexecuted_blocks=1 00:33:22.908 00:33:22.908 ' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.908 --rc genhtml_branch_coverage=1 00:33:22.908 --rc genhtml_function_coverage=1 00:33:22.908 --rc genhtml_legend=1 00:33:22.908 --rc geninfo_all_blocks=1 00:33:22.908 --rc geninfo_unexecuted_blocks=1 00:33:22.908 00:33:22.908 ' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.908 --rc genhtml_branch_coverage=1 00:33:22.908 --rc genhtml_function_coverage=1 00:33:22.908 --rc genhtml_legend=1 00:33:22.908 --rc geninfo_all_blocks=1 00:33:22.908 --rc geninfo_unexecuted_blocks=1 00:33:22.908 00:33:22.908 ' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.908 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.908 14:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.909 14:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.909 14:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:24.816 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:24.817 14:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:24.817 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:24.817 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:24.817 Found net devices under 0000:09:00.0: cvl_0_0 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.817 14:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:24.817 Found net devices under 0000:09:00.1: cvl_0_1 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.817 14:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.817 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.818 14:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:24.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:33:24.818 00:33:24.818 --- 10.0.0.2 ping statistics --- 00:33:24.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.818 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:33:24.818 00:33:24.818 --- 10.0.0.1 ping statistics --- 00:33:24.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.818 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:24.818 only one NIC for nvmf test 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:24.818 14:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.818 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.077 rmmod nvme_tcp 00:33:25.077 rmmod nvme_fabrics 00:33:25.077 rmmod nvme_keyring 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:25.077 14:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.077 14:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.987 
14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:26.987 00:33:26.987 real 0m4.542s 00:33:26.987 user 0m0.948s 00:33:26.987 sys 0m1.612s 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:26.987 ************************************ 00:33:26.987 END TEST nvmf_target_multipath 00:33:26.987 ************************************ 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:26.987 ************************************ 00:33:26.987 START TEST nvmf_zcopy 00:33:26.987 ************************************ 00:33:26.987 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:27.248 * Looking for test storage... 
00:33:27.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:27.248 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.248 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:27.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.248 --rc genhtml_branch_coverage=1 00:33:27.248 --rc genhtml_function_coverage=1 00:33:27.248 --rc genhtml_legend=1 00:33:27.248 --rc geninfo_all_blocks=1 00:33:27.248 --rc geninfo_unexecuted_blocks=1 00:33:27.249 00:33:27.249 ' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:27.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.249 --rc genhtml_branch_coverage=1 00:33:27.249 --rc genhtml_function_coverage=1 00:33:27.249 --rc genhtml_legend=1 00:33:27.249 --rc geninfo_all_blocks=1 00:33:27.249 --rc geninfo_unexecuted_blocks=1 00:33:27.249 00:33:27.249 ' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:27.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.249 --rc genhtml_branch_coverage=1 00:33:27.249 --rc genhtml_function_coverage=1 00:33:27.249 --rc genhtml_legend=1 00:33:27.249 --rc geninfo_all_blocks=1 00:33:27.249 --rc geninfo_unexecuted_blocks=1 00:33:27.249 00:33:27.249 ' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:27.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.249 --rc genhtml_branch_coverage=1 00:33:27.249 --rc genhtml_function_coverage=1 00:33:27.249 --rc genhtml_legend=1 00:33:27.249 --rc geninfo_all_blocks=1 00:33:27.249 --rc geninfo_unexecuted_blocks=1 00:33:27.249 00:33:27.249 ' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.249 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.249 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.249 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.250 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.250 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:29.784 
14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:29.784 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.785 14:05:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:29.785 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:29.785 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:29.785 Found net devices under 0000:09:00.0: cvl_0_0 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:29.785 Found net devices under 0000:09:00.1: cvl_0_1 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.785 14:05:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:29.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:29.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:33:29.785 00:33:29.785 --- 10.0.0.2 ping statistics --- 00:33:29.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.785 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:33:29.785 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:29.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:33:29.786 00:33:29.786 --- 10.0.0.1 ping statistics --- 00:33:29.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.786 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2393184 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2393184 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2393184 ']' 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.786 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:29.786 [2024-12-05 14:05:01.027054] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:29.786 [2024-12-05 14:05:01.028155] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:33:29.786 [2024-12-05 14:05:01.028211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.786 [2024-12-05 14:05:01.103647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.786 [2024-12-05 14:05:01.160146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:29.786 [2024-12-05 14:05:01.160193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:29.786 [2024-12-05 14:05:01.160221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:29.786 [2024-12-05 14:05:01.160231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:29.786 [2024-12-05 14:05:01.160241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:29.786 [2024-12-05 14:05:01.160918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.786 [2024-12-05 14:05:01.250258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:29.786 [2024-12-05 14:05:01.250617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
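The network plumbing traced above (nvmf/common.sh@265-291) can be condensed into a standalone sketch. Interface names `cvl_0_0`/`cvl_0_1`, the `10.0.0.x` addressing, and the port-4420 iptables rule are taken from the log; the NIC driver setup that created those interfaces is assumed to have run already, and everything here needs root.

```shell
#!/usr/bin/env bash
# Hedged sketch of the target-side netns setup replayed from the log above.
# Requires root; cvl_0_0 and cvl_0_1 must already exist.
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"        # target side moves into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in, then verify reachability both ways,
# matching the two ping checks in the log.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` a few entries later.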
00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:29.786 [2024-12-05 14:05:01.301569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.786 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.044 
14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.044 [2024-12-05 14:05:01.317684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.044 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.045 malloc0 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.045 { 00:33:30.045 "params": { 00:33:30.045 "name": "Nvme$subsystem", 00:33:30.045 "trtype": "$TEST_TRANSPORT", 00:33:30.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.045 "adrfam": "ipv4", 00:33:30.045 "trsvcid": "$NVMF_PORT", 00:33:30.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.045 "hdgst": ${hdgst:-false}, 00:33:30.045 "ddgst": ${ddgst:-false} 00:33:30.045 }, 00:33:30.045 "method": "bdev_nvme_attach_controller" 00:33:30.045 } 00:33:30.045 EOF 00:33:30.045 )") 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:30.045 14:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:30.045 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:30.045 "params": { 00:33:30.045 "name": "Nvme1", 00:33:30.045 "trtype": "tcp", 00:33:30.045 "traddr": "10.0.0.2", 00:33:30.045 "adrfam": "ipv4", 00:33:30.045 "trsvcid": "4420", 00:33:30.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:30.045 "hdgst": false, 00:33:30.045 "ddgst": false 00:33:30.045 }, 00:33:30.045 "method": "bdev_nvme_attach_controller" 00:33:30.045 }' 00:33:30.045 [2024-12-05 14:05:01.394681] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:30.045 [2024-12-05 14:05:01.394770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393324 ] 00:33:30.045 [2024-12-05 14:05:01.460436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.045 [2024-12-05 14:05:01.515338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.614 Running I/O for 10 seconds... 
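The `gen_nvmf_target_json` heredoc expanded above produces the single `bdev_nvme_attach_controller` entry that bdevperf consumes via `--json /dev/fd/62`. A minimal standalone reproduction of that output, with the literal values (`10.0.0.2`, `4420`, `cnode1`/`host1`) taken straight from the log:

```shell
# Sketch of the JSON fragment gen_nvmf_target_json emitted in the log above.
gen_target_json() {
  cat <<JSON
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
}

gen_target_json
```

In the test itself this is piped to bdevperf as an anonymous file descriptor, e.g. `bdevperf --json <(gen_target_json) -t 10 -q 128 -w verify -o 8192`, so no config file ever touches disk.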
00:33:32.489 5551.00 IOPS, 43.37 MiB/s [2024-12-05T13:05:04.953Z] 5643.00 IOPS, 44.09 MiB/s [2024-12-05T13:05:06.327Z] 5654.33 IOPS, 44.17 MiB/s [2024-12-05T13:05:07.265Z] 5674.75 IOPS, 44.33 MiB/s [2024-12-05T13:05:08.200Z] 5671.60 IOPS, 44.31 MiB/s [2024-12-05T13:05:09.140Z] 5675.33 IOPS, 44.34 MiB/s [2024-12-05T13:05:10.077Z] 5674.00 IOPS, 44.33 MiB/s [2024-12-05T13:05:11.016Z] 5675.38 IOPS, 44.34 MiB/s [2024-12-05T13:05:11.992Z] 5674.22 IOPS, 44.33 MiB/s [2024-12-05T13:05:11.992Z] 5680.30 IOPS, 44.38 MiB/s 00:33:40.466 Latency(us) 00:33:40.466 [2024-12-05T13:05:11.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.466 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:40.466 Verification LBA range: start 0x0 length 0x1000 00:33:40.466 Nvme1n1 : 10.01 5681.21 44.38 0.00 0.00 22467.38 409.60 30486.38 00:33:40.466 [2024-12-05T13:05:11.992Z] =================================================================================================================== 00:33:40.466 [2024-12-05T13:05:11.992Z] Total : 5681.21 44.38 0.00 0.00 22467.38 409.60 30486.38 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2394513 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:40.733 14:05:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.733 { 00:33:40.733 "params": { 00:33:40.733 "name": "Nvme$subsystem", 00:33:40.733 "trtype": "$TEST_TRANSPORT", 00:33:40.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.733 "adrfam": "ipv4", 00:33:40.733 "trsvcid": "$NVMF_PORT", 00:33:40.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.733 "hdgst": ${hdgst:-false}, 00:33:40.733 "ddgst": ${ddgst:-false} 00:33:40.733 }, 00:33:40.733 "method": "bdev_nvme_attach_controller" 00:33:40.733 } 00:33:40.733 EOF 00:33:40.733 )") 00:33:40.733 [2024-12-05 14:05:12.157497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.157540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:40.733 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.733 "params": { 00:33:40.733 "name": "Nvme1", 00:33:40.733 "trtype": "tcp", 00:33:40.733 "traddr": "10.0.0.2", 00:33:40.733 "adrfam": "ipv4", 00:33:40.733 "trsvcid": "4420", 00:33:40.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:40.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:40.733 "hdgst": false, 00:33:40.733 "ddgst": false 00:33:40.733 }, 00:33:40.733 "method": "bdev_nvme_attach_controller" 00:33:40.733 }' 00:33:40.733 [2024-12-05 14:05:12.165372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.165392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.173371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.173390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.181369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.181387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.189369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.189387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.197370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.197388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.201361] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:33:40.733 [2024-12-05 14:05:12.201453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394513 ] 00:33:40.733 [2024-12-05 14:05:12.205369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.205389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.213371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.213390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.221369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.221388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.229369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.229412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.237370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.237388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.245369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.245388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.733 [2024-12-05 14:05:12.253374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.733 [2024-12-05 14:05:12.253393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
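The long run of `Requested NSID 1 already in use` / `Unable to add namespace` entries is the test deliberately re-issuing the same add-namespace RPC in a loop while bdevperf I/O is in flight; each attempt is expected to fail, exercising subsystem pause/resume under zcopy load. The `rpc_cmd` below is a stub that mimics that failure so the loop's error-tolerant shape can be run standalone (the real `rpc_cmd` wraps SPDK's rpc.py):

```shell
# Stub simulating the expected RPC failure seen in the log (hypothetical;
# the real rpc_cmd talks to the running nvmf_tgt).
rpc_cmd() {
  echo "spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use" >&2
  return 1
}

failures=0
for _ in $(seq 5); do
  # The `||` arm mirrors how the test tolerates the expected error
  # instead of aborting.
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
    || failures=$((failures + 1))
done
echo "failures=$failures"
```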
00:33:40.996 [2024-12-05 14:05:12.261388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.261407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.269154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.996 [2024-12-05 14:05:12.269373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.269393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.277429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.277462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.285435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.285497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.293371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.293390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.301376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.301396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.309370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.309389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.317370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.317388] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.325369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.325387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.328777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.996 [2024-12-05 14:05:12.333369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.333387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.341372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.341391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.349434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.349484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.357434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.357469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.365433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.365468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.373438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.373500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.381432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:40.996 [2024-12-05 14:05:12.381481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.389433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.389468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.397371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.397390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.405429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.405463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.413433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.413469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.421451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.421504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.429376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.429411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.437373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.437392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.445381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 
14:05:12.445427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.453375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.453412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.461381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.461427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.469373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.469409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.477387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.477408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.485370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.485404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.493369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.493387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.501370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.501388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.509373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.509393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:33:40.996 [2024-12-05 14:05:12.517374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.996 [2024-12-05 14:05:12.517410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.525377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.525433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.533374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.533410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.541375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.541410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.549374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.549412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.557376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.557414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 Running I/O for 5 seconds... 
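Before either bdevperf run, the target was provisioned with the `rpc_cmd` sequence visible earlier in this excerpt (zcopy.sh@22-30). A recap as plain RPC invocations; the `scripts/rpc.py` path is illustrative, and the commands assume the `nvmf_tgt` started inside `cvl_0_0_ns_spdk` is listening on the default RPC socket:

```shell
RPC=scripts/rpc.py   # assumed location within an SPDK checkout

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0   # 32 MB malloc bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The `--zcopy -c 0` transport options are what put this run on the zero-copy path that the subsequent 5-second randrw workload measures.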
00:33:41.255 [2024-12-05 14:05:12.572802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.572830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.583919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.583945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.599472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.599501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.609062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.609086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.621073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.621099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.631840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.631864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.644453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.644486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.656594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.255 [2024-12-05 14:05:12.656622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.255 [2024-12-05 14:05:12.670775] 
00:33:42.294 11711.00 IOPS, 91.49 MiB/s [2024-12-05T13:05:13.820Z]
[2024-12-05 14:05:14.462182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.471566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.471593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.487824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.487850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.503513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.503540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.520913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.520938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.531563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.531589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.547668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.547694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.557449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.557476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 11735.50 IOPS, 91.68 MiB/s [2024-12-05T13:05:14.599Z] [2024-12-05 14:05:14.569345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 
[2024-12-05 14:05:14.569370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.580113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.580137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.073 [2024-12-05 14:05:14.596310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.073 [2024-12-05 14:05:14.596336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.610860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.610887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.620505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.620533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.635646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.635690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.653270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.653313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.663378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.663426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.674874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.674898] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.693844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.693868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.704373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.704412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.719818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.719843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.737213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.331 [2024-12-05 14:05:14.737238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.331 [2024-12-05 14:05:14.746692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.746734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.332 [2024-12-05 14:05:14.758765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.758790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.332 [2024-12-05 14:05:14.773992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.774017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.332 [2024-12-05 14:05:14.783188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.783213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:43.332 [2024-12-05 14:05:14.794842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.794867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.332 [2024-12-05 14:05:14.810720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.810746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.332 [2024-12-05 14:05:14.820001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.820027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.332 [2024-12-05 14:05:14.834359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.834391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.332 [2024-12-05 14:05:14.843892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.332 [2024-12-05 14:05:14.843916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.859034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.859074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.868463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.868489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.882682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.882722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.892222] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.892262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.906460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.906486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.915801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.915826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.930425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.930450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.940214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.940240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.954171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.954197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.963835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.963862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.978712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.978738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:14.988611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:14.988638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.590 [2024-12-05 14:05:15.000170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.590 [2024-12-05 14:05:15.000195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.015457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 [2024-12-05 14:05:15.015484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.033225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 [2024-12-05 14:05:15.033252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.042630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 [2024-12-05 14:05:15.042657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.059038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 [2024-12-05 14:05:15.059063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.077299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 [2024-12-05 14:05:15.077331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.087855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 [2024-12-05 14:05:15.087879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.102984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 
[2024-12-05 14:05:15.103008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.591 [2024-12-05 14:05:15.112313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.591 [2024-12-05 14:05:15.112337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.127235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.127276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.140917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.140943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.154610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.154637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.164067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.164090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.178322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.178346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.188338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.188378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.202723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.202746] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.850 [2024-12-05 14:05:15.212072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.850 [2024-12-05 14:05:15.212096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.226388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.226438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.236966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.236989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.247809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.247832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.261352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.261378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.271100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.271124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.282960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.282983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.299659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.299686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:43.851 [2024-12-05 14:05:15.317478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.317514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.328500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.328527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.343835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.343860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.359018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.359060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.851 [2024-12-05 14:05:15.368932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.851 [2024-12-05 14:05:15.368956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.380447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.380489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.395049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.395091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.404347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.404372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.418824] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.418848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.435020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.435062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.444289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.444315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.458478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.458504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.467685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.467731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.481913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.481938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.492478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.492504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.507021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.507047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.516690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.516733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.528360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.528400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.542771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.108 [2024-12-05 14:05:15.542798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.108 [2024-12-05 14:05:15.551875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.109 [2024-12-05 14:05:15.551909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.109 [2024-12-05 14:05:15.565851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.109 [2024-12-05 14:05:15.565877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.109 11741.67 IOPS, 91.73 MiB/s [2024-12-05T13:05:15.635Z] [2024-12-05 14:05:15.575736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.109 [2024-12-05 14:05:15.575776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.109 [2024-12-05 14:05:15.591979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.109 [2024-12-05 14:05:15.592003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.109 [2024-12-05 14:05:15.605757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.109 [2024-12-05 14:05:15.605784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.109 [2024-12-05 14:05:15.615372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:44.109 [2024-12-05 14:05:15.615412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.109 [2024-12-05 14:05:15.627248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.109 [2024-12-05 14:05:15.627274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.642514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.642542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.651899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.651924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.666293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.666332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.675799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.675824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.689510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.689536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.698865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.698889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.710525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 
[2024-12-05 14:05:15.710551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.721527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.721554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.732544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.732570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.746535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.746561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.755697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.755738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.771496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.771538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.787243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.787271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.804605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.804646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.814526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.814553] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.830309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.830333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.839965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.840006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.854129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.854155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.864190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.864216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.878398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.878447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.368 [2024-12-05 14:05:15.887485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.368 [2024-12-05 14:05:15.887512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.901996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.902022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.912006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.912030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:44.626 [2024-12-05 14:05:15.926011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.926035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.935102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.935128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.946524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.946551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.957284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.957310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.968153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.968192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.982801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.982828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:15.992128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:15.992153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.626 [2024-12-05 14:05:16.007804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.626 [2024-12-05 14:05:16.007828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.627 [2024-12-05 14:05:16.023202] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:44.627 [2024-12-05 14:05:16.023227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two messages above repeat in lockstep with varying timestamps from 14:05:16.023 through 14:05:17.789; repeats omitted ...]
00:33:45.146 11750.50 IOPS, 91.80 MiB/s [2024-12-05T13:05:16.672Z]
00:33:46.184 11745.40 IOPS, 91.76 MiB/s [2024-12-05T13:05:17.710Z]
00:33:46.184 Latency(us)
00:33:46.184 [2024-12-05T13:05:17.710Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average      min      max
00:33:46.184 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:46.184 Nvme1n1            :       5.01 11747.43   91.78    0.00  0.00  10882.74  2924.85 19320.98
00:33:46.184 [2024-12-05T13:05:17.710Z] ===================================================================================================================
00:33:46.184 [2024-12-05T13:05:17.710Z] Total              :            11747.43   91.78    0.00  0.00  10882.74  2924.85 19320.98
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2394513) - No such process
00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2394513 00:33:46.443 14:05:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.443 delay0 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.443 14:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:46.443 [2024-12-05 14:05:17.908192] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:54.577 [2024-12-05 14:05:25.101860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c903e0 is same with the state(6) to be set 00:33:54.577 Initializing NVMe Controllers 00:33:54.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:54.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:54.577 Initialization complete. Launching workers. 00:33:54.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 231, failed: 22207 00:33:54.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22319, failed to submit 119 00:33:54.577 success 22248, unsuccessful 71, failed 0 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:54.577 rmmod nvme_tcp 00:33:54.577 rmmod nvme_fabrics 00:33:54.577 rmmod nvme_keyring 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2393184 ']' 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2393184 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2393184 ']' 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2393184 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2393184 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2393184' 00:33:54.577 killing process with pid 2393184 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2393184 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2393184 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:54.577 14:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.577 14:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.480 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.480 00:33:56.480 real 0m28.985s 00:33:56.480 user 0m41.120s 00:33:56.480 sys 0m10.142s 00:33:56.480 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.481 ************************************ 00:33:56.481 END TEST nvmf_zcopy 00:33:56.481 ************************************ 00:33:56.481 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:56.481 ************************************ 00:33:56.481 START TEST nvmf_nmic 00:33:56.481 ************************************ 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:56.481 * Looking for test storage... 00:33:56.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.481 --rc genhtml_branch_coverage=1 00:33:56.481 --rc 
genhtml_function_coverage=1 00:33:56.481 --rc genhtml_legend=1 00:33:56.481 --rc geninfo_all_blocks=1 00:33:56.481 --rc geninfo_unexecuted_blocks=1 00:33:56.481 00:33:56.481 ' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.481 --rc genhtml_branch_coverage=1 00:33:56.481 --rc genhtml_function_coverage=1 00:33:56.481 --rc genhtml_legend=1 00:33:56.481 --rc geninfo_all_blocks=1 00:33:56.481 --rc geninfo_unexecuted_blocks=1 00:33:56.481 00:33:56.481 ' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.481 --rc genhtml_branch_coverage=1 00:33:56.481 --rc genhtml_function_coverage=1 00:33:56.481 --rc genhtml_legend=1 00:33:56.481 --rc geninfo_all_blocks=1 00:33:56.481 --rc geninfo_unexecuted_blocks=1 00:33:56.481 00:33:56.481 ' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.481 --rc genhtml_branch_coverage=1 00:33:56.481 --rc genhtml_function_coverage=1 00:33:56.481 --rc genhtml_legend=1 00:33:56.481 --rc geninfo_all_blocks=1 00:33:56.481 --rc geninfo_unexecuted_blocks=1 00:33:56.481 00:33:56.481 ' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.481 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.481 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.482 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.482 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.482 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:58.383 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:58.383 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.383 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:58.384 Found net devices under 0000:09:00.0: cvl_0_0 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:58.384 Found net devices under 0000:09:00.1: cvl_0_1 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.384 14:05:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.384 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:58.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:33:58.643 00:33:58.643 --- 10.0.0.2 ping statistics --- 00:33:58.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.643 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:58.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:33:58.643 00:33:58.643 --- 10.0.0.1 ping statistics --- 00:33:58.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.643 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:58.643 14:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:58.643 14:05:30 
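(Editor's note: the nvmf_tcp_init trace above boils down to a small iproute2/iptables sequence. The sketch below condenses exactly the commands visible in the log; it defaults to a dry-run that only prints each command, since executing it for real needs root and a host that actually has the cvl_0_0/cvl_0_1 net devices.)

```shell
#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps traced above (nvmf/common.sh).
# DRY_RUN=1 (the default here) just prints each command; set DRY_RUN=0 and
# run as root on a host with the cvl_0_0/cvl_0_1 interfaces to execute them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk                       # target-side network namespace
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"      # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1  # initiator IP stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic to port 4420 on the initiator-side interface:
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                   # initiator -> target sanity check
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

This separation (target NIC inside a namespace, initiator NIC outside) is what lets a single machine exercise a real TCP path between the SPDK target and the kernel initiator, which is why the nvmf_tgt process is launched via `ip netns exec cvl_0_0_ns_spdk` later in the log.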
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2398018 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2398018 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2398018 ']' 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.643 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.643 [2024-12-05 14:05:30.056302] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:33:58.643 [2024-12-05 14:05:30.057528] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:33:58.643 [2024-12-05 14:05:30.057596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.643 [2024-12-05 14:05:30.130784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:58.902 [2024-12-05 14:05:30.189106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:58.902 [2024-12-05 14:05:30.189157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.902 [2024-12-05 14:05:30.189185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:58.902 [2024-12-05 14:05:30.189196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:58.902 [2024-12-05 14:05:30.189206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:58.902 [2024-12-05 14:05:30.190885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.902 [2024-12-05 14:05:30.190931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.902 [2024-12-05 14:05:30.191082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:58.902 [2024-12-05 14:05:30.191085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.902 [2024-12-05 14:05:30.277909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:58.902 [2024-12-05 14:05:30.278145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:58.902 [2024-12-05 14:05:30.278451] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:58.902 [2024-12-05 14:05:30.279025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:58.902 [2024-12-05 14:05:30.279243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:58.902 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 [2024-12-05 14:05:30.331814] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 Malloc0 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 [2024-12-05 
14:05:30.399984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:58.903 test case1: single bdev can't be used in multiple subsystems 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:58.903 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 14:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 [2024-12-05 14:05:30.423742] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:58.903 [2024-12-05 14:05:30.423787] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:58.903 [2024-12-05 14:05:30.423803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.903 request: 00:33:58.903 { 00:33:58.903 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:58.903 "namespace": { 00:33:58.903 "bdev_name": "Malloc0", 00:33:59.162 "no_auto_visible": false, 00:33:59.162 "hide_metadata": false 00:33:59.162 }, 00:33:59.162 "method": "nvmf_subsystem_add_ns", 00:33:59.162 "req_id": 1 00:33:59.162 } 00:33:59.162 Got JSON-RPC error response 00:33:59.162 response: 00:33:59.162 { 00:33:59.162 "code": -32602, 00:33:59.162 "message": "Invalid parameters" 00:33:59.162 } 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:59.162 Adding namespace failed - expected result. 
00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:59.162 test case2: host connect to nvmf target in multiple paths 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:59.162 [2024-12-05 14:05:30.431867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:59.162 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:59.420 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:59.420 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:59.420 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:59.420 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:59.420 14:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:01.954 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:01.954 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:01.954 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:01.954 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:01.954 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:01.954 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:01.954 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:01.954 [global] 00:34:01.954 thread=1 00:34:01.954 invalidate=1 00:34:01.954 rw=write 00:34:01.954 time_based=1 00:34:01.954 runtime=1 00:34:01.954 ioengine=libaio 00:34:01.954 direct=1 00:34:01.954 bs=4096 00:34:01.954 iodepth=1 00:34:01.954 norandommap=0 00:34:01.954 numjobs=1 00:34:01.954 00:34:01.954 verify_dump=1 00:34:01.954 verify_backlog=512 00:34:01.954 verify_state_save=0 00:34:01.954 do_verify=1 00:34:01.954 verify=crc32c-intel 00:34:01.954 [job0] 00:34:01.954 filename=/dev/nvme0n1 00:34:01.954 Could not set queue depth (nvme0n1) 00:34:01.954 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.954 fio-3.35 00:34:01.954 Starting 1 thread 00:34:02.891 00:34:02.891 job0: (groupid=0, jobs=1): err= 0: pid=2398401: Thu Dec 5 
14:05:34 2024 00:34:02.891 read: IOPS=23, BW=92.5KiB/s (94.7kB/s)(96.0KiB/1038msec) 00:34:02.891 slat (nsec): min=6603, max=34869, avg=28036.58, stdev=9402.18 00:34:02.891 clat (usec): min=227, max=41010, avg=39250.91, stdev=8312.19 00:34:02.891 lat (usec): min=261, max=41032, avg=39278.95, stdev=8310.88 00:34:02.891 clat percentiles (usec): 00:34:02.891 | 1.00th=[ 229], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:02.891 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:02.891 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:02.891 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:02.891 | 99.99th=[41157] 00:34:02.891 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:34:02.891 slat (nsec): min=6596, max=45304, avg=10893.49, stdev=4560.22 00:34:02.891 clat (usec): min=139, max=922, avg=170.74, stdev=36.23 00:34:02.891 lat (usec): min=147, max=932, avg=181.64, stdev=37.14 00:34:02.891 clat percentiles (usec): 00:34:02.891 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:34:02.891 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:34:02.891 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:34:02.891 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 922], 99.95th=[ 922], 00:34:02.891 | 99.99th=[ 922] 00:34:02.891 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:02.891 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:02.891 lat (usec) : 250=95.34%, 500=0.19%, 1000=0.19% 00:34:02.891 lat (msec) : 50=4.29% 00:34:02.891 cpu : usr=0.29%, sys=0.68%, ctx=538, majf=0, minf=1 00:34:02.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.891 issued 
rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.891 00:34:02.891 Run status group 0 (all jobs): 00:34:02.891 READ: bw=92.5KiB/s (94.7kB/s), 92.5KiB/s-92.5KiB/s (94.7kB/s-94.7kB/s), io=96.0KiB (98.3kB), run=1038-1038msec 00:34:02.891 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:34:02.891 00:34:02.891 Disk stats (read/write): 00:34:02.891 nvme0n1: ios=69/512, merge=0/0, ticks=1740/81, in_queue=1821, util=99.00% 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:02.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:02.891 14:05:34 
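(Editor's note: the `[global]`/`[job0]` parameters that fio-wrapper echoed above assemble into a standalone fio job file roughly like the following. This is a reconstruction from the logged values only; `/dev/nvme0n1` is whatever block device the earlier `nvme connect` calls produced on this host.)

```ini
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
```

With a job file like this, `fio job0.fio` reproduces the single-threaded 4 KiB write-plus-verify pass shown in the run summary above.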
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.891 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.891 rmmod nvme_tcp 00:34:03.148 rmmod nvme_fabrics 00:34:03.148 rmmod nvme_keyring 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2398018 ']' 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2398018 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2398018 ']' 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2398018 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2398018 
00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2398018' 00:34:03.148 killing process with pid 2398018 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2398018 00:34:03.148 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2398018 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.405 14:05:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.405 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.310 00:34:05.310 real 0m9.224s 00:34:05.310 user 0m17.284s 00:34:05.310 sys 0m3.256s 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.310 ************************************ 00:34:05.310 END TEST nvmf_nmic 00:34:05.310 ************************************ 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:05.310 ************************************ 00:34:05.310 START TEST nvmf_fio_target 00:34:05.310 ************************************ 00:34:05.310 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:05.569 * Looking for test storage... 
00:34:05.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.569 
14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:05.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.569 --rc genhtml_branch_coverage=1 00:34:05.569 --rc genhtml_function_coverage=1 00:34:05.569 --rc genhtml_legend=1 00:34:05.569 --rc geninfo_all_blocks=1 00:34:05.569 --rc geninfo_unexecuted_blocks=1 00:34:05.569 00:34:05.569 ' 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:05.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.569 --rc genhtml_branch_coverage=1 00:34:05.569 --rc genhtml_function_coverage=1 00:34:05.569 --rc genhtml_legend=1 00:34:05.569 --rc geninfo_all_blocks=1 00:34:05.569 --rc geninfo_unexecuted_blocks=1 00:34:05.569 00:34:05.569 ' 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:05.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.569 --rc genhtml_branch_coverage=1 00:34:05.569 --rc genhtml_function_coverage=1 00:34:05.569 --rc genhtml_legend=1 00:34:05.569 --rc geninfo_all_blocks=1 00:34:05.569 --rc geninfo_unexecuted_blocks=1 00:34:05.569 00:34:05.569 ' 00:34:05.569 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:05.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.569 --rc genhtml_branch_coverage=1 00:34:05.569 --rc genhtml_function_coverage=1 00:34:05.569 --rc genhtml_legend=1 00:34:05.569 --rc geninfo_all_blocks=1 
00:34:05.569 --rc geninfo_unexecuted_blocks=1 00:34:05.569 00:34:05.569 ' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:05.570 
14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.570 14:05:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.570 
14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:05.570 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.570 14:05:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:08.108 14:05:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:08.108 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:08.108 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.108 
14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:08.108 Found net 
devices under 0000:09:00.0: cvl_0_0 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.108 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:08.109 Found net devices under 0000:09:00.1: cvl_0_1 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:08.109 14:05:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:08.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:34:08.109 00:34:08.109 --- 10.0.0.2 ping statistics --- 00:34:08.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.109 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:08.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:34:08.109 00:34:08.109 --- 10.0.0.1 ping statistics --- 00:34:08.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.109 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.109 14:05:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2400599 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2400599 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2400599 ']' 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.109 [2024-12-05 14:05:39.266821] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:08.109 [2024-12-05 14:05:39.267911] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:34:08.109 [2024-12-05 14:05:39.267975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.109 [2024-12-05 14:05:39.338188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:08.109 [2024-12-05 14:05:39.394987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.109 [2024-12-05 14:05:39.395042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.109 [2024-12-05 14:05:39.395071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.109 [2024-12-05 14:05:39.395083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.109 [2024-12-05 14:05:39.395092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.109 [2024-12-05 14:05:39.396681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.109 [2024-12-05 14:05:39.396739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.109 [2024-12-05 14:05:39.396789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.109 [2024-12-05 14:05:39.396792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.109 [2024-12-05 14:05:39.494985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:08.109 [2024-12-05 14:05:39.495211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:08.109 [2024-12-05 14:05:39.495499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:08.109 [2024-12-05 14:05:39.496186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:08.109 [2024-12-05 14:05:39.496433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.109 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:08.368 [2024-12-05 14:05:39.797515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.368 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:08.625 14:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:08.625 14:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:34:09.192 14:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:09.192 14:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:09.450 14:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:09.450 14:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:09.709 14:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:09.709 14:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:09.967 14:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.226 14:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:10.226 14:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.484 14:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:10.484 14:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.763 14:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:10.763 14:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:11.080 14:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:11.343 14:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:11.343 14:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:11.601 14:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:11.601 14:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:12.168 14:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.426 [2024-12-05 14:05:43.717665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.426 14:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:12.684 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:12.943 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:13.202 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:13.202 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:13.202 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:13.202 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:13.202 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:13.202 14:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:15.105 14:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:15.105 14:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:15.105 14:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:15.105 14:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:15.105 14:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:15.105 14:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:15.105 14:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:15.105 [global] 00:34:15.105 thread=1 00:34:15.105 invalidate=1 00:34:15.105 rw=write 00:34:15.105 time_based=1 00:34:15.105 runtime=1 00:34:15.105 ioengine=libaio 00:34:15.105 direct=1 00:34:15.105 bs=4096 00:34:15.105 iodepth=1 00:34:15.105 norandommap=0 00:34:15.105 numjobs=1 00:34:15.105 00:34:15.105 verify_dump=1 00:34:15.105 verify_backlog=512 00:34:15.105 verify_state_save=0 00:34:15.105 do_verify=1 00:34:15.105 verify=crc32c-intel 00:34:15.105 [job0] 00:34:15.105 filename=/dev/nvme0n1 00:34:15.105 [job1] 00:34:15.105 filename=/dev/nvme0n2 00:34:15.105 [job2] 00:34:15.105 filename=/dev/nvme0n3 00:34:15.105 [job3] 00:34:15.105 filename=/dev/nvme0n4 00:34:15.363 Could not set queue depth (nvme0n1) 00:34:15.363 Could not set queue depth (nvme0n2) 00:34:15.363 Could not set queue depth (nvme0n3) 00:34:15.363 Could not set queue depth (nvme0n4) 00:34:15.363 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.363 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.363 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.363 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.363 fio-3.35 00:34:15.363 Starting 4 threads 00:34:16.741 00:34:16.741 job0: (groupid=0, jobs=1): err= 0: pid=2401547: Thu Dec 5 14:05:48 2024 00:34:16.741 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:34:16.741 slat (nsec): min=9543, max=14635, avg=13973.76, stdev=1047.09 00:34:16.741 clat (usec): min=40787, max=41007, avg=40968.47, stdev=43.96 00:34:16.741 lat (usec): min=40796, 
max=41021, avg=40982.44, stdev=44.94 00:34:16.741 clat percentiles (usec): 00:34:16.741 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:16.741 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:16.741 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:16.741 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:16.741 | 99.99th=[41157] 00:34:16.741 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:34:16.741 slat (nsec): min=8542, max=40702, avg=10864.21, stdev=2632.99 00:34:16.741 clat (usec): min=145, max=1026, avg=285.54, stdev=95.15 00:34:16.741 lat (usec): min=157, max=1036, avg=296.41, stdev=95.38 00:34:16.741 clat percentiles (usec): 00:34:16.741 | 1.00th=[ 172], 5.00th=[ 215], 10.00th=[ 229], 20.00th=[ 241], 00:34:16.741 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:34:16.741 | 70.00th=[ 277], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 404], 00:34:16.741 | 99.00th=[ 775], 99.50th=[ 906], 99.90th=[ 1029], 99.95th=[ 1029], 00:34:16.741 | 99.99th=[ 1029] 00:34:16.741 bw ( KiB/s): min= 4096, max= 4096, per=48.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:16.741 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:16.741 lat (usec) : 250=36.21%, 500=57.79%, 750=0.56%, 1000=1.13% 00:34:16.741 lat (msec) : 2=0.38%, 50=3.94% 00:34:16.741 cpu : usr=0.39%, sys=0.69%, ctx=534, majf=0, minf=1 00:34:16.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.741 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:16.741 job1: (groupid=0, jobs=1): err= 0: pid=2401548: Thu Dec 5 14:05:48 2024 00:34:16.741 read: IOPS=334, 
BW=1340KiB/s (1372kB/s)(1356KiB/1012msec) 00:34:16.741 slat (nsec): min=6004, max=56565, avg=14017.77, stdev=7034.55 00:34:16.741 clat (usec): min=218, max=41055, avg=2642.57, stdev=9501.72 00:34:16.741 lat (usec): min=225, max=41062, avg=2656.58, stdev=9502.67 00:34:16.741 clat percentiles (usec): 00:34:16.741 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 245], 00:34:16.741 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:34:16.741 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 400], 95.00th=[40633], 00:34:16.741 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:16.741 | 99.99th=[41157] 00:34:16.741 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:34:16.742 slat (nsec): min=7848, max=43616, avg=9444.33, stdev=2180.42 00:34:16.742 clat (usec): min=152, max=1222, avg=200.95, stdev=63.27 00:34:16.742 lat (usec): min=161, max=1233, avg=210.40, stdev=63.45 00:34:16.742 clat percentiles (usec): 00:34:16.742 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 180], 00:34:16.742 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:34:16.742 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 225], 95.00th=[ 314], 00:34:16.742 | 99.00th=[ 420], 99.50th=[ 461], 99.90th=[ 1221], 99.95th=[ 1221], 00:34:16.742 | 99.99th=[ 1221] 00:34:16.742 bw ( KiB/s): min= 4096, max= 4096, per=48.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:16.742 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:16.742 lat (usec) : 250=71.68%, 500=25.73%, 750=0.12% 00:34:16.742 lat (msec) : 2=0.12%, 50=2.35% 00:34:16.742 cpu : usr=0.59%, sys=1.38%, ctx=852, majf=0, minf=1 00:34:16.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.742 issued rwts: total=339,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:16.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:16.742 job2: (groupid=0, jobs=1): err= 0: pid=2401549: Thu Dec 5 14:05:48 2024 00:34:16.742 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:34:16.742 slat (nsec): min=7408, max=34851, avg=20760.86, stdev=10464.93 00:34:16.742 clat (usec): min=31619, max=41370, avg=40546.61, stdev=2047.39 00:34:16.742 lat (usec): min=31654, max=41377, avg=40567.37, stdev=2044.15 00:34:16.742 clat percentiles (usec): 00:34:16.742 | 1.00th=[31589], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:16.742 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:16.742 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:16.742 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:16.742 | 99.99th=[41157] 00:34:16.742 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:34:16.742 slat (nsec): min=7529, max=30687, avg=9607.25, stdev=2756.69 00:34:16.742 clat (usec): min=158, max=944, avg=280.52, stdev=71.44 00:34:16.742 lat (usec): min=167, max=952, avg=290.13, stdev=71.89 00:34:16.742 clat percentiles (usec): 00:34:16.742 | 1.00th=[ 176], 5.00th=[ 208], 10.00th=[ 229], 20.00th=[ 243], 00:34:16.742 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:34:16.742 | 70.00th=[ 281], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 396], 00:34:16.742 | 99.00th=[ 490], 99.50th=[ 685], 99.90th=[ 947], 99.95th=[ 947], 00:34:16.742 | 99.99th=[ 947] 00:34:16.742 bw ( KiB/s): min= 4096, max= 4096, per=48.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:16.742 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:16.742 lat (usec) : 250=35.46%, 500=59.85%, 750=0.38%, 1000=0.38% 00:34:16.742 lat (msec) : 50=3.94% 00:34:16.742 cpu : usr=0.30%, sys=0.60%, ctx=533, majf=0, minf=2 00:34:16.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.742 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.742 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:16.742 job3: (groupid=0, jobs=1): err= 0: pid=2401550: Thu Dec 5 14:05:48 2024 00:34:16.742 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:16.742 slat (nsec): min=12926, max=43232, avg=18604.89, stdev=2811.08 00:34:16.742 clat (usec): min=242, max=41052, avg=1637.56, stdev=7198.33 00:34:16.742 lat (usec): min=261, max=41065, avg=1656.17, stdev=7198.59 00:34:16.742 clat percentiles (usec): 00:34:16.742 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 262], 20.00th=[ 269], 00:34:16.742 | 30.00th=[ 273], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:34:16.742 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 318], 95.00th=[ 416], 00:34:16.742 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:16.742 | 99.99th=[41157] 00:34:16.742 write: IOPS=603, BW=2414KiB/s (2472kB/s)(2416KiB/1001msec); 0 zone resets 00:34:16.742 slat (nsec): min=8508, max=47021, avg=12681.77, stdev=5098.09 00:34:16.742 clat (usec): min=164, max=688, avg=230.67, stdev=79.80 00:34:16.742 lat (usec): min=173, max=698, avg=243.35, stdev=80.48 00:34:16.742 clat percentiles (usec): 00:34:16.742 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 182], 00:34:16.742 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:34:16.742 | 70.00th=[ 206], 80.00th=[ 289], 90.00th=[ 367], 95.00th=[ 412], 00:34:16.742 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[ 693], 99.95th=[ 693], 00:34:16.742 | 99.99th=[ 693] 00:34:16.742 bw ( KiB/s): min= 4096, max= 4096, per=48.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:16.742 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:16.742 lat (usec) : 250=41.22%, 500=56.99%, 750=0.18% 00:34:16.742 lat (msec) : 
50=1.61% 00:34:16.742 cpu : usr=1.30%, sys=2.20%, ctx=1117, majf=0, minf=1 00:34:16.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.742 issued rwts: total=512,604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:16.742 00:34:16.742 Run status group 0 (all jobs): 00:34:16.742 READ: bw=3523KiB/s (3607kB/s), 82.8KiB/s-2046KiB/s (84.8kB/s-2095kB/s), io=3572KiB (3658kB), run=1001-1014msec 00:34:16.742 WRITE: bw=8442KiB/s (8644kB/s), 2020KiB/s-2414KiB/s (2068kB/s-2472kB/s), io=8560KiB (8765kB), run=1001-1014msec 00:34:16.742 00:34:16.742 Disk stats (read/write): 00:34:16.742 nvme0n1: ios=68/512, merge=0/0, ticks=842/141, in_queue=983, util=85.97% 00:34:16.742 nvme0n2: ios=371/512, merge=0/0, ticks=913/101, in_queue=1014, util=89.62% 00:34:16.742 nvme0n3: ios=74/512, merge=0/0, ticks=757/141, in_queue=898, util=94.78% 00:34:16.742 nvme0n4: ios=147/512, merge=0/0, ticks=1190/122, in_queue=1312, util=94.43% 00:34:16.742 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:16.742 [global] 00:34:16.742 thread=1 00:34:16.742 invalidate=1 00:34:16.742 rw=randwrite 00:34:16.742 time_based=1 00:34:16.742 runtime=1 00:34:16.742 ioengine=libaio 00:34:16.742 direct=1 00:34:16.742 bs=4096 00:34:16.742 iodepth=1 00:34:16.742 norandommap=0 00:34:16.742 numjobs=1 00:34:16.742 00:34:16.742 verify_dump=1 00:34:16.742 verify_backlog=512 00:34:16.742 verify_state_save=0 00:34:16.742 do_verify=1 00:34:16.742 verify=crc32c-intel 00:34:16.742 [job0] 00:34:16.742 filename=/dev/nvme0n1 00:34:16.742 [job1] 00:34:16.742 filename=/dev/nvme0n2 00:34:16.742 [job2] 00:34:16.742 
filename=/dev/nvme0n3 00:34:16.742 [job3] 00:34:16.742 filename=/dev/nvme0n4 00:34:16.742 Could not set queue depth (nvme0n1) 00:34:16.742 Could not set queue depth (nvme0n2) 00:34:16.743 Could not set queue depth (nvme0n3) 00:34:16.743 Could not set queue depth (nvme0n4) 00:34:17.001 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.001 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.001 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.001 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.001 fio-3.35 00:34:17.001 Starting 4 threads 00:34:18.375 00:34:18.375 job0: (groupid=0, jobs=1): err= 0: pid=2401902: Thu Dec 5 14:05:49 2024 00:34:18.375 read: IOPS=1557, BW=6231KiB/s (6381kB/s)(6412KiB/1029msec) 00:34:18.375 slat (nsec): min=6046, max=51028, avg=9398.12, stdev=4054.59 00:34:18.375 clat (usec): min=219, max=41018, avg=374.69, stdev=2267.70 00:34:18.375 lat (usec): min=226, max=41033, avg=384.09, stdev=2268.00 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 223], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 231], 00:34:18.375 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:34:18.375 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:34:18.375 | 99.00th=[ 334], 99.50th=[ 445], 99.90th=[41157], 99.95th=[41157], 00:34:18.375 | 99.99th=[41157] 00:34:18.375 write: IOPS=1990, BW=7961KiB/s (8152kB/s)(8192KiB/1029msec); 0 zone resets 00:34:18.375 slat (nsec): min=7293, max=40512, avg=11216.61, stdev=4955.29 00:34:18.375 clat (usec): min=148, max=488, avg=185.16, stdev=40.39 00:34:18.375 lat (usec): min=158, max=512, avg=196.38, stdev=42.60 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 
00:34:18.375 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:34:18.375 | 70.00th=[ 180], 80.00th=[ 215], 90.00th=[ 243], 95.00th=[ 265], 00:34:18.375 | 99.00th=[ 347], 99.50th=[ 388], 99.90th=[ 453], 99.95th=[ 474], 00:34:18.375 | 99.99th=[ 490] 00:34:18.375 bw ( KiB/s): min= 8056, max= 8328, per=55.37%, avg=8192.00, stdev=192.33, samples=2 00:34:18.375 iops : min= 2014, max= 2082, avg=2048.00, stdev=48.08, samples=2 00:34:18.375 lat (usec) : 250=81.98%, 500=17.80%, 750=0.03%, 1000=0.03% 00:34:18.375 lat (msec) : 2=0.03%, 50=0.14% 00:34:18.375 cpu : usr=3.02%, sys=4.77%, ctx=3654, majf=0, minf=1 00:34:18.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 issued rwts: total=1603,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:18.375 job1: (groupid=0, jobs=1): err= 0: pid=2401903: Thu Dec 5 14:05:49 2024 00:34:18.375 read: IOPS=282, BW=1131KiB/s (1158kB/s)(1132KiB/1001msec) 00:34:18.375 slat (nsec): min=5313, max=35109, avg=7531.14, stdev=4196.83 00:34:18.375 clat (usec): min=224, max=41009, avg=3125.66, stdev=10441.93 00:34:18.375 lat (usec): min=233, max=41024, avg=3133.19, stdev=10444.39 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:34:18.375 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:34:18.375 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[41157], 00:34:18.375 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:18.375 | 99.99th=[41157] 00:34:18.375 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:18.375 slat (nsec): min=6685, max=40810, avg=19468.06, stdev=10830.86 00:34:18.375 clat (usec): min=151, 
max=712, avg=196.27, stdev=46.41 00:34:18.375 lat (usec): min=164, max=723, avg=215.73, stdev=47.11 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:34:18.375 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:34:18.375 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 241], 00:34:18.375 | 99.00th=[ 420], 99.50th=[ 578], 99.90th=[ 709], 99.95th=[ 709], 00:34:18.375 | 99.99th=[ 709] 00:34:18.375 bw ( KiB/s): min= 4096, max= 4096, per=27.69%, avg=4096.00, stdev= 0.00, samples=1 00:34:18.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:18.375 lat (usec) : 250=78.49%, 500=18.62%, 750=0.38% 00:34:18.375 lat (msec) : 50=2.52% 00:34:18.375 cpu : usr=0.50%, sys=1.70%, ctx=795, majf=0, minf=1 00:34:18.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 issued rwts: total=283,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:18.375 job2: (groupid=0, jobs=1): err= 0: pid=2401904: Thu Dec 5 14:05:49 2024 00:34:18.375 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:34:18.375 slat (nsec): min=14130, max=33332, avg=16900.05, stdev=5580.27 00:34:18.375 clat (usec): min=40989, max=45975, avg=42062.76, stdev=943.78 00:34:18.375 lat (usec): min=41004, max=45993, avg=42079.66, stdev=944.26 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:34:18.375 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:18.375 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:18.375 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:34:18.375 | 99.99th=[45876] 
00:34:18.375 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:34:18.375 slat (nsec): min=7038, max=44555, avg=14237.08, stdev=7120.58 00:34:18.375 clat (usec): min=176, max=381, avg=235.73, stdev=26.53 00:34:18.375 lat (usec): min=183, max=407, avg=249.97, stdev=27.54 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 217], 00:34:18.375 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:34:18.375 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 277], 00:34:18.375 | 99.00th=[ 310], 99.50th=[ 343], 99.90th=[ 383], 99.95th=[ 383], 00:34:18.375 | 99.99th=[ 383] 00:34:18.375 bw ( KiB/s): min= 4096, max= 4096, per=27.69%, avg=4096.00, stdev= 0.00, samples=1 00:34:18.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:18.375 lat (usec) : 250=71.48%, 500=24.58% 00:34:18.375 lat (msec) : 50=3.94% 00:34:18.375 cpu : usr=0.79%, sys=0.69%, ctx=533, majf=0, minf=1 00:34:18.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:18.376 job3: (groupid=0, jobs=1): err= 0: pid=2401905: Thu Dec 5 14:05:49 2024 00:34:18.376 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:18.376 slat (nsec): min=6022, max=35574, avg=7769.87, stdev=3159.37 00:34:18.376 clat (usec): min=187, max=41038, avg=1615.85, stdev=7299.71 00:34:18.376 lat (usec): min=193, max=41054, avg=1623.62, stdev=7301.76 00:34:18.376 clat percentiles (usec): 00:34:18.376 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 239], 00:34:18.376 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:34:18.376 | 70.00th=[ 
255], 80.00th=[ 262], 90.00th=[ 285], 95.00th=[ 529], 00:34:18.376 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:18.376 | 99.99th=[41157] 00:34:18.376 write: IOPS=733, BW=2933KiB/s (3003kB/s)(2936KiB/1001msec); 0 zone resets 00:34:18.376 slat (nsec): min=7510, max=41104, avg=14027.31, stdev=7951.05 00:34:18.376 clat (usec): min=162, max=428, avg=210.52, stdev=33.73 00:34:18.376 lat (usec): min=170, max=466, avg=224.55, stdev=36.96 00:34:18.376 clat percentiles (usec): 00:34:18.376 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 184], 00:34:18.376 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 212], 00:34:18.376 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 251], 95.00th=[ 269], 00:34:18.376 | 99.00th=[ 322], 99.50th=[ 379], 99.90th=[ 429], 99.95th=[ 429], 00:34:18.376 | 99.99th=[ 429] 00:34:18.376 bw ( KiB/s): min= 4096, max= 4096, per=27.69%, avg=4096.00, stdev= 0.00, samples=1 00:34:18.376 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:18.376 lat (usec) : 250=75.28%, 500=22.39%, 750=0.56%, 1000=0.24% 00:34:18.376 lat (msec) : 2=0.16%, 50=1.36% 00:34:18.376 cpu : usr=0.40%, sys=2.50%, ctx=1247, majf=0, minf=1 00:34:18.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.376 issued rwts: total=512,734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:18.376 00:34:18.376 Run status group 0 (all jobs): 00:34:18.376 READ: bw=9403KiB/s (9629kB/s), 82.8KiB/s-6231KiB/s (84.8kB/s-6381kB/s), io=9676KiB (9908kB), run=1001-1029msec 00:34:18.376 WRITE: bw=14.4MiB/s (15.1MB/s), 2020KiB/s-7961KiB/s (2068kB/s-8152kB/s), io=14.9MiB (15.6MB), run=1001-1029msec 00:34:18.376 00:34:18.376 Disk stats (read/write): 00:34:18.376 nvme0n1: 
ios=1638/2048, merge=0/0, ticks=568/362, in_queue=930, util=97.29% 00:34:18.376 nvme0n2: ios=27/512, merge=0/0, ticks=743/102, in_queue=845, util=86.80% 00:34:18.376 nvme0n3: ios=17/512, merge=0/0, ticks=713/117, in_queue=830, util=89.05% 00:34:18.376 nvme0n4: ios=153/512, merge=0/0, ticks=1669/115, in_queue=1784, util=96.64% 00:34:18.376 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:18.376 [global] 00:34:18.376 thread=1 00:34:18.376 invalidate=1 00:34:18.376 rw=write 00:34:18.376 time_based=1 00:34:18.376 runtime=1 00:34:18.376 ioengine=libaio 00:34:18.376 direct=1 00:34:18.376 bs=4096 00:34:18.376 iodepth=128 00:34:18.376 norandommap=0 00:34:18.376 numjobs=1 00:34:18.376 00:34:18.376 verify_dump=1 00:34:18.376 verify_backlog=512 00:34:18.376 verify_state_save=0 00:34:18.376 do_verify=1 00:34:18.376 verify=crc32c-intel 00:34:18.376 [job0] 00:34:18.376 filename=/dev/nvme0n1 00:34:18.376 [job1] 00:34:18.376 filename=/dev/nvme0n2 00:34:18.376 [job2] 00:34:18.376 filename=/dev/nvme0n3 00:34:18.376 [job3] 00:34:18.376 filename=/dev/nvme0n4 00:34:18.376 Could not set queue depth (nvme0n1) 00:34:18.376 Could not set queue depth (nvme0n2) 00:34:18.376 Could not set queue depth (nvme0n3) 00:34:18.376 Could not set queue depth (nvme0n4) 00:34:18.376 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:18.376 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:18.376 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:18.376 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:18.376 fio-3.35 00:34:18.376 Starting 4 threads 00:34:19.752 00:34:19.752 job0: (groupid=0, jobs=1): err= 0: 
pid=2402129: Thu Dec 5 14:05:50 2024 00:34:19.752 read: IOPS=4830, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1002msec) 00:34:19.752 slat (usec): min=2, max=10041, avg=93.47, stdev=505.77 00:34:19.752 clat (usec): min=643, max=34710, avg=12633.20, stdev=3580.57 00:34:19.752 lat (usec): min=3808, max=34733, avg=12726.67, stdev=3609.14 00:34:19.752 clat percentiles (usec): 00:34:19.752 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11469], 00:34:19.752 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:34:19.752 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13435], 95.00th=[15926], 00:34:19.752 | 99.00th=[32375], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:34:19.752 | 99.99th=[34866] 00:34:19.752 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:34:19.752 slat (usec): min=3, max=8428, avg=96.45, stdev=511.15 00:34:19.752 clat (usec): min=7867, max=32134, avg=12807.62, stdev=3105.30 00:34:19.752 lat (usec): min=7892, max=32145, avg=12904.07, stdev=3140.48 00:34:19.752 clat percentiles (usec): 00:34:19.752 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[11469], 00:34:19.752 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:34:19.752 | 70.00th=[12518], 80.00th=[12649], 90.00th=[14877], 95.00th=[19530], 00:34:19.752 | 99.00th=[26870], 99.50th=[27919], 99.90th=[32113], 99.95th=[32113], 00:34:19.752 | 99.99th=[32113] 00:34:19.752 bw ( KiB/s): min=20480, max=20480, per=27.92%, avg=20480.00, stdev= 0.00, samples=2 00:34:19.752 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:34:19.752 lat (usec) : 750=0.01% 00:34:19.752 lat (msec) : 4=0.08%, 10=5.82%, 20=90.01%, 50=4.08% 00:34:19.752 cpu : usr=7.79%, sys=10.19%, ctx=455, majf=0, minf=1 00:34:19.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:34:19.752 issued rwts: total=4840,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:19.752 job1: (groupid=0, jobs=1): err= 0: pid=2402130: Thu Dec 5 14:05:50 2024 00:34:19.752 read: IOPS=4239, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1004msec) 00:34:19.752 slat (usec): min=3, max=43922, avg=112.12, stdev=867.86 00:34:19.752 clat (usec): min=2267, max=57708, avg=14434.61, stdev=8612.28 00:34:19.752 lat (usec): min=4785, max=57720, avg=14546.73, stdev=8650.96 00:34:19.752 clat percentiles (usec): 00:34:19.752 | 1.00th=[ 7963], 5.00th=[10159], 10.00th=[10814], 20.00th=[11338], 00:34:19.752 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:34:19.752 | 70.00th=[12911], 80.00th=[13435], 90.00th=[16319], 95.00th=[33424], 00:34:19.752 | 99.00th=[55837], 99.50th=[57410], 99.90th=[57410], 99.95th=[57934], 00:34:19.752 | 99.99th=[57934] 00:34:19.752 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:34:19.752 slat (usec): min=4, max=12354, avg=102.80, stdev=549.60 00:34:19.752 clat (usec): min=7681, max=76564, avg=14178.73, stdev=7544.14 00:34:19.752 lat (usec): min=7703, max=81688, avg=14281.52, stdev=7582.66 00:34:19.752 clat percentiles (usec): 00:34:19.752 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:34:19.752 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:34:19.752 | 70.00th=[13042], 80.00th=[13566], 90.00th=[18482], 95.00th=[23200], 00:34:19.752 | 99.00th=[50594], 99.50th=[63177], 99.90th=[69731], 99.95th=[69731], 00:34:19.752 | 99.99th=[77071] 00:34:19.752 bw ( KiB/s): min=16416, max=20480, per=25.15%, avg=18448.00, stdev=2873.68, samples=2 00:34:19.752 iops : min= 4104, max= 5120, avg=4612.00, stdev=718.42, samples=2 00:34:19.752 lat (msec) : 4=0.01%, 10=4.75%, 20=87.50%, 50=5.58%, 100=2.15% 00:34:19.752 cpu : usr=6.78%, sys=10.17%, ctx=443, majf=0, minf=1 00:34:19.752 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:19.752 issued rwts: total=4256,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:19.752 job2: (groupid=0, jobs=1): err= 0: pid=2402131: Thu Dec 5 14:05:50 2024 00:34:19.752 read: IOPS=4200, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec) 00:34:19.752 slat (usec): min=3, max=9465, avg=114.65, stdev=699.58 00:34:19.752 clat (usec): min=544, max=31430, avg=14603.77, stdev=4579.29 00:34:19.752 lat (usec): min=4709, max=31437, avg=14718.41, stdev=4610.33 00:34:19.752 clat percentiles (usec): 00:34:19.752 | 1.00th=[ 5604], 5.00th=[10028], 10.00th=[10814], 20.00th=[11994], 00:34:19.752 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13566], 60.00th=[14091], 00:34:19.752 | 70.00th=[14615], 80.00th=[16188], 90.00th=[18744], 95.00th=[27657], 00:34:19.752 | 99.00th=[30540], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:34:19.753 | 99.99th=[31327] 00:34:19.753 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:34:19.753 slat (usec): min=4, max=27966, avg=102.06, stdev=673.59 00:34:19.753 clat (usec): min=4592, max=38573, avg=14149.86, stdev=4550.65 00:34:19.753 lat (usec): min=4648, max=38616, avg=14251.93, stdev=4571.64 00:34:19.753 clat percentiles (usec): 00:34:19.753 | 1.00th=[ 7373], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[12518], 00:34:19.753 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13435], 60.00th=[13698], 00:34:19.753 | 70.00th=[14091], 80.00th=[14353], 90.00th=[16057], 95.00th=[18482], 00:34:19.753 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:34:19.753 | 99.99th=[38536] 00:34:19.753 bw ( KiB/s): min=18336, max=18512, per=25.11%, avg=18424.00, stdev=124.45, samples=2 00:34:19.753 iops : min= 4584, max= 
4628, avg=4606.00, stdev=31.11, samples=2 00:34:19.753 lat (usec) : 750=0.01% 00:34:19.753 lat (msec) : 10=5.22%, 20=88.56%, 50=6.21% 00:34:19.753 cpu : usr=6.87%, sys=9.86%, ctx=389, majf=0, minf=1 00:34:19.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:19.753 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:19.753 job3: (groupid=0, jobs=1): err= 0: pid=2402133: Thu Dec 5 14:05:50 2024 00:34:19.753 read: IOPS=3961, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1002msec) 00:34:19.753 slat (usec): min=2, max=13604, avg=122.61, stdev=724.83 00:34:19.753 clat (usec): min=958, max=29257, avg=15828.09, stdev=4147.10 00:34:19.753 lat (usec): min=2105, max=29264, avg=15950.69, stdev=4153.78 00:34:19.753 clat percentiles (usec): 00:34:19.753 | 1.00th=[ 3359], 5.00th=[10552], 10.00th=[12780], 20.00th=[13435], 00:34:19.753 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15270], 00:34:19.753 | 70.00th=[16909], 80.00th=[18482], 90.00th=[21103], 95.00th=[24249], 00:34:19.753 | 99.00th=[28181], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:34:19.753 | 99.99th=[29230] 00:34:19.753 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:34:19.753 slat (usec): min=3, max=13300, avg=112.53, stdev=630.77 00:34:19.753 clat (usec): min=2809, max=33718, avg=15612.46, stdev=4576.03 00:34:19.753 lat (usec): min=2815, max=33731, avg=15724.99, stdev=4603.79 00:34:19.753 clat percentiles (usec): 00:34:19.753 | 1.00th=[ 5800], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:34:19.753 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14222], 60.00th=[15139], 00:34:19.753 | 70.00th=[15795], 80.00th=[18482], 90.00th=[21890], 95.00th=[25822], 00:34:19.753 | 
99.00th=[31065], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:34:19.753 | 99.99th=[33817] 00:34:19.753 bw ( KiB/s): min=16288, max=16513, per=22.36%, avg=16400.50, stdev=159.10, samples=2 00:34:19.753 iops : min= 4072, max= 4128, avg=4100.00, stdev=39.60, samples=2 00:34:19.753 lat (usec) : 1000=0.01% 00:34:19.753 lat (msec) : 4=0.69%, 10=2.11%, 20=81.39%, 50=15.80% 00:34:19.753 cpu : usr=5.09%, sys=9.29%, ctx=366, majf=0, minf=2 00:34:19.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:19.753 issued rwts: total=3969,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:19.753 00:34:19.753 Run status group 0 (all jobs): 00:34:19.753 READ: bw=67.2MiB/s (70.5MB/s), 15.5MiB/s-18.9MiB/s (16.2MB/s-19.8MB/s), io=67.5MiB (70.8MB), run=1002-1005msec 00:34:19.753 WRITE: bw=71.6MiB/s (75.1MB/s), 16.0MiB/s-20.0MiB/s (16.7MB/s-20.9MB/s), io=72.0MiB (75.5MB), run=1002-1005msec 00:34:19.753 00:34:19.753 Disk stats (read/write): 00:34:19.753 nvme0n1: ios=4146/4223, merge=0/0, ticks=15877/15565, in_queue=31442, util=86.87% 00:34:19.753 nvme0n2: ios=3599/3975, merge=0/0, ticks=13719/12889, in_queue=26608, util=96.95% 00:34:19.753 nvme0n3: ios=3627/4070, merge=0/0, ticks=23048/24044, in_queue=47092, util=98.33% 00:34:19.753 nvme0n4: ios=3595/3584, merge=0/0, ticks=24506/24393, in_queue=48899, util=97.06% 00:34:19.753 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:19.753 [global] 00:34:19.753 thread=1 00:34:19.753 invalidate=1 00:34:19.753 rw=randwrite 00:34:19.753 time_based=1 00:34:19.753 runtime=1 00:34:19.753 ioengine=libaio 00:34:19.753 
direct=1 00:34:19.753 bs=4096 00:34:19.753 iodepth=128 00:34:19.753 norandommap=0 00:34:19.753 numjobs=1 00:34:19.753 00:34:19.753 verify_dump=1 00:34:19.753 verify_backlog=512 00:34:19.753 verify_state_save=0 00:34:19.753 do_verify=1 00:34:19.753 verify=crc32c-intel 00:34:19.753 [job0] 00:34:19.753 filename=/dev/nvme0n1 00:34:19.753 [job1] 00:34:19.753 filename=/dev/nvme0n2 00:34:19.753 [job2] 00:34:19.753 filename=/dev/nvme0n3 00:34:19.753 [job3] 00:34:19.753 filename=/dev/nvme0n4 00:34:19.753 Could not set queue depth (nvme0n1) 00:34:19.753 Could not set queue depth (nvme0n2) 00:34:19.753 Could not set queue depth (nvme0n3) 00:34:19.753 Could not set queue depth (nvme0n4) 00:34:19.753 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:19.753 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:19.753 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:19.753 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:19.753 fio-3.35 00:34:19.753 Starting 4 threads 00:34:21.133 00:34:21.133 job0: (groupid=0, jobs=1): err= 0: pid=2402365: Thu Dec 5 14:05:52 2024 00:34:21.133 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:34:21.133 slat (usec): min=2, max=15584, avg=88.10, stdev=685.85 00:34:21.133 clat (usec): min=3379, max=48159, avg=13607.91, stdev=5929.20 00:34:21.133 lat (usec): min=3383, max=48163, avg=13696.01, stdev=5964.07 00:34:21.133 clat percentiles (usec): 00:34:21.133 | 1.00th=[ 4113], 5.00th=[ 5145], 10.00th=[ 7308], 20.00th=[10683], 00:34:21.133 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:34:21.133 | 70.00th=[12911], 80.00th=[16057], 90.00th=[22414], 95.00th=[27657], 00:34:21.133 | 99.00th=[33162], 99.50th=[33162], 99.90th=[44827], 99.95th=[44827], 00:34:21.133 | 
99.99th=[47973] 00:34:21.133 write: IOPS=4985, BW=19.5MiB/s (20.4MB/s)(19.6MiB/1006msec); 0 zone resets 00:34:21.133 slat (usec): min=3, max=10685, avg=87.76, stdev=598.04 00:34:21.133 clat (usec): min=248, max=71511, avg=12925.79, stdev=9206.65 00:34:21.133 lat (usec): min=454, max=71516, avg=13013.55, stdev=9248.70 00:34:21.133 clat percentiles (usec): 00:34:21.133 | 1.00th=[ 1614], 5.00th=[ 2933], 10.00th=[ 5014], 20.00th=[ 9372], 00:34:21.133 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:34:21.133 | 70.00th=[12780], 80.00th=[13698], 90.00th=[17433], 95.00th=[25035], 00:34:21.133 | 99.00th=[65799], 99.50th=[66323], 99.90th=[71828], 99.95th=[71828], 00:34:21.133 | 99.99th=[71828] 00:34:21.133 bw ( KiB/s): min=19152, max=19952, per=26.77%, avg=19552.00, stdev=565.69, samples=2 00:34:21.133 iops : min= 4788, max= 4988, avg=4888.00, stdev=141.42, samples=2 00:34:21.133 lat (usec) : 250=0.01%, 500=0.03%, 1000=0.11% 00:34:21.133 lat (msec) : 2=1.23%, 4=3.39%, 10=13.52%, 20=72.37%, 50=8.22% 00:34:21.133 lat (msec) : 100=1.12% 00:34:21.133 cpu : usr=3.98%, sys=8.46%, ctx=350, majf=0, minf=2 00:34:21.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:21.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.133 issued rwts: total=4608,5015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.133 job1: (groupid=0, jobs=1): err= 0: pid=2402366: Thu Dec 5 14:05:52 2024 00:34:21.133 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:34:21.133 slat (usec): min=3, max=5279, avg=87.09, stdev=472.79 00:34:21.133 clat (usec): min=7060, max=48566, avg=11862.49, stdev=3931.89 00:34:21.133 lat (usec): min=7065, max=51828, avg=11949.58, stdev=3940.31 00:34:21.133 clat percentiles (usec): 00:34:21.133 | 1.00th=[ 8160], 5.00th=[ 9110], 
10.00th=[ 9896], 20.00th=[10421], 00:34:21.133 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11731], 00:34:21.133 | 70.00th=[11994], 80.00th=[12387], 90.00th=[13304], 95.00th=[13960], 00:34:21.133 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:34:21.133 | 99.99th=[48497] 00:34:21.133 write: IOPS=5213, BW=20.4MiB/s (21.4MB/s)(20.4MiB/1004msec); 0 zone resets 00:34:21.133 slat (usec): min=3, max=39055, avg=96.50, stdev=748.15 00:34:21.133 clat (usec): min=3305, max=50477, avg=12721.37, stdev=5105.58 00:34:21.133 lat (usec): min=4105, max=50529, avg=12817.88, stdev=5141.75 00:34:21.133 clat percentiles (usec): 00:34:21.133 | 1.00th=[ 7701], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10814], 00:34:21.133 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:34:21.133 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14877], 95.00th=[17695], 00:34:21.133 | 99.00th=[48497], 99.50th=[49021], 99.90th=[49546], 99.95th=[50070], 00:34:21.133 | 99.99th=[50594] 00:34:21.133 bw ( KiB/s): min=20480, max=20520, per=28.07%, avg=20500.00, stdev=28.28, samples=2 00:34:21.133 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:34:21.133 lat (msec) : 4=0.01%, 10=11.91%, 20=85.59%, 50=2.48%, 100=0.01% 00:34:21.133 cpu : usr=6.18%, sys=9.77%, ctx=471, majf=0, minf=1 00:34:21.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:21.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.134 issued rwts: total=5120,5234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.134 job2: (groupid=0, jobs=1): err= 0: pid=2402367: Thu Dec 5 14:05:52 2024 00:34:21.134 read: IOPS=4511, BW=17.6MiB/s (18.5MB/s)(17.8MiB/1010msec) 00:34:21.134 slat (usec): min=2, max=11899, avg=112.61, stdev=834.84 00:34:21.134 clat (usec): 
min=3993, max=27215, avg=14633.49, stdev=3308.22 00:34:21.134 lat (usec): min=3997, max=27228, avg=14746.10, stdev=3349.74 00:34:21.134 clat percentiles (usec): 00:34:21.134 | 1.00th=[ 8029], 5.00th=[10028], 10.00th=[10945], 20.00th=[11863], 00:34:21.134 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14222], 60.00th=[14615], 00:34:21.134 | 70.00th=[16188], 80.00th=[17433], 90.00th=[19530], 95.00th=[20579], 00:34:21.134 | 99.00th=[22152], 99.50th=[22938], 99.90th=[24511], 99.95th=[26346], 00:34:21.134 | 99.99th=[27132] 00:34:21.134 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:34:21.134 slat (usec): min=3, max=13924, avg=94.32, stdev=739.88 00:34:21.134 clat (usec): min=713, max=39530, avg=13331.15, stdev=3659.84 00:34:21.134 lat (usec): min=718, max=39543, avg=13425.47, stdev=3719.65 00:34:21.134 clat percentiles (usec): 00:34:21.134 | 1.00th=[ 4178], 5.00th=[ 7439], 10.00th=[ 8848], 20.00th=[10814], 00:34:21.134 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13829], 60.00th=[14222], 00:34:21.134 | 70.00th=[14746], 80.00th=[15401], 90.00th=[17171], 95.00th=[18744], 00:34:21.134 | 99.00th=[23987], 99.50th=[25035], 99.90th=[32900], 99.95th=[32900], 00:34:21.134 | 99.99th=[39584] 00:34:21.134 bw ( KiB/s): min=16384, max=20480, per=25.24%, avg=18432.00, stdev=2896.31, samples=2 00:34:21.134 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:34:21.134 lat (usec) : 750=0.03% 00:34:21.134 lat (msec) : 4=0.46%, 10=10.09%, 20=83.25%, 50=6.16% 00:34:21.134 cpu : usr=2.97%, sys=5.45%, ctx=290, majf=0, minf=1 00:34:21.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:21.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.134 issued rwts: total=4557,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.134 job3: 
(groupid=0, jobs=1): err= 0: pid=2402368: Thu Dec 5 14:05:52 2024 00:34:21.134 read: IOPS=3387, BW=13.2MiB/s (13.9MB/s)(13.4MiB/1009msec) 00:34:21.134 slat (usec): min=3, max=21380, avg=130.25, stdev=1019.40 00:34:21.134 clat (usec): min=4053, max=41414, avg=16578.24, stdev=5579.19 00:34:21.134 lat (usec): min=4066, max=41421, avg=16708.49, stdev=5639.89 00:34:21.134 clat percentiles (usec): 00:34:21.134 | 1.00th=[ 6915], 5.00th=[10421], 10.00th=[11338], 20.00th=[12125], 00:34:21.134 | 30.00th=[12911], 40.00th=[13435], 50.00th=[14484], 60.00th=[16057], 00:34:21.134 | 70.00th=[19792], 80.00th=[21627], 90.00th=[23200], 95.00th=[27132], 00:34:21.134 | 99.00th=[35390], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:34:21.134 | 99.99th=[41157] 00:34:21.134 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:34:21.134 slat (usec): min=3, max=13872, avg=144.96, stdev=872.08 00:34:21.134 clat (usec): min=916, max=91650, avg=19860.68, stdev=18339.97 00:34:21.134 lat (usec): min=923, max=91662, avg=20005.64, stdev=18470.69 00:34:21.134 clat percentiles (usec): 00:34:21.134 | 1.00th=[ 4752], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[11469], 00:34:21.134 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13829], 60.00th=[14353], 00:34:21.134 | 70.00th=[14484], 80.00th=[16450], 90.00th=[58459], 95.00th=[68682], 00:34:21.134 | 99.00th=[84411], 99.50th=[86508], 99.90th=[90702], 99.95th=[91751], 00:34:21.134 | 99.99th=[91751] 00:34:21.134 bw ( KiB/s): min=10192, max=18480, per=19.63%, avg=14336.00, stdev=5860.50, samples=2 00:34:21.134 iops : min= 2548, max= 4620, avg=3584.00, stdev=1465.13, samples=2 00:34:21.134 lat (usec) : 1000=0.11% 00:34:21.134 lat (msec) : 4=0.17%, 10=7.36%, 20=70.61%, 50=15.97%, 100=5.78% 00:34:21.134 cpu : usr=4.66%, sys=7.44%, ctx=336, majf=0, minf=1 00:34:21.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:21.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:21.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.134 issued rwts: total=3418,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.134 00:34:21.134 Run status group 0 (all jobs): 00:34:21.134 READ: bw=68.5MiB/s (71.8MB/s), 13.2MiB/s-19.9MiB/s (13.9MB/s-20.9MB/s), io=69.2MiB (72.5MB), run=1004-1010msec 00:34:21.134 WRITE: bw=71.3MiB/s (74.8MB/s), 13.9MiB/s-20.4MiB/s (14.5MB/s-21.4MB/s), io=72.0MiB (75.5MB), run=1004-1010msec 00:34:21.134 00:34:21.134 Disk stats (read/write): 00:34:21.134 nvme0n1: ios=4089/4096, merge=0/0, ticks=40572/36913, in_queue=77485, util=98.30% 00:34:21.134 nvme0n2: ios=4146/4608, merge=0/0, ticks=17965/20423, in_queue=38388, util=98.17% 00:34:21.134 nvme0n3: ios=3764/4096, merge=0/0, ticks=34743/37569, in_queue=72312, util=90.94% 00:34:21.134 nvme0n4: ios=2607/3055, merge=0/0, ticks=42476/62464, in_queue=104940, util=97.06% 00:34:21.134 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:21.134 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2402502 00:34:21.134 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:21.134 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:21.134 [global] 00:34:21.134 thread=1 00:34:21.134 invalidate=1 00:34:21.134 rw=read 00:34:21.134 time_based=1 00:34:21.134 runtime=10 00:34:21.134 ioengine=libaio 00:34:21.134 direct=1 00:34:21.134 bs=4096 00:34:21.134 iodepth=1 00:34:21.134 norandommap=1 00:34:21.134 numjobs=1 00:34:21.134 00:34:21.134 [job0] 00:34:21.134 filename=/dev/nvme0n1 00:34:21.134 [job1] 00:34:21.134 filename=/dev/nvme0n2 00:34:21.134 [job2] 00:34:21.134 filename=/dev/nvme0n3 
00:34:21.134 [job3] 00:34:21.134 filename=/dev/nvme0n4 00:34:21.134 Could not set queue depth (nvme0n1) 00:34:21.134 Could not set queue depth (nvme0n2) 00:34:21.134 Could not set queue depth (nvme0n3) 00:34:21.134 Could not set queue depth (nvme0n4) 00:34:21.134 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.134 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.134 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.134 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.134 fio-3.35 00:34:21.134 Starting 4 threads 00:34:24.426 14:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:24.426 14:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:24.426 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=471040, buflen=4096 00:34:24.427 fio: pid=2402653, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:24.683 14:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:24.683 14:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:24.683 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43970560, buflen=4096 00:34:24.683 fio: pid=2402639, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:24.940 14:05:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:24.940 14:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:24.940 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10637312, buflen=4096 00:34:24.940 fio: pid=2402599, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:25.198 14:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:25.198 14:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:25.198 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50581504, buflen=4096 00:34:25.198 fio: pid=2402608, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:25.198 00:34:25.198 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2402599: Thu Dec 5 14:05:56 2024 00:34:25.198 read: IOPS=743, BW=2974KiB/s (3045kB/s)(10.1MiB/3493msec) 00:34:25.198 slat (usec): min=5, max=17879, avg=20.03, stdev=350.57 00:34:25.198 clat (usec): min=222, max=41353, avg=1312.23, stdev=6400.99 00:34:25.198 lat (usec): min=230, max=58998, avg=1332.25, stdev=6453.15 00:34:25.198 clat percentiles (usec): 00:34:25.198 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 260], 00:34:25.198 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:34:25.198 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[ 375], 00:34:25.198 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:25.198 | 99.99th=[41157] 00:34:25.198 
bw ( KiB/s): min= 96, max=10936, per=12.57%, avg=3438.67, stdev=4313.90, samples=6 00:34:25.198 iops : min= 24, max= 2734, avg=859.67, stdev=1078.48, samples=6 00:34:25.198 lat (usec) : 250=8.20%, 500=89.07%, 750=0.08% 00:34:25.198 lat (msec) : 4=0.08%, 50=2.54% 00:34:25.198 cpu : usr=0.49%, sys=1.72%, ctx=2599, majf=0, minf=1 00:34:25.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 issued rwts: total=2598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.198 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2402608: Thu Dec 5 14:05:56 2024 00:34:25.198 read: IOPS=3272, BW=12.8MiB/s (13.4MB/s)(48.2MiB/3774msec) 00:34:25.198 slat (usec): min=4, max=31153, avg=12.34, stdev=293.49 00:34:25.198 clat (usec): min=196, max=41113, avg=289.14, stdev=1270.55 00:34:25.198 lat (usec): min=201, max=46969, avg=301.48, stdev=1319.01 00:34:25.198 clat percentiles (usec): 00:34:25.198 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:34:25.198 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:34:25.198 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:34:25.198 | 99.00th=[ 437], 99.50th=[ 529], 99.90th=[ 1860], 99.95th=[41157], 00:34:25.198 | 99.99th=[41157] 00:34:25.198 bw ( KiB/s): min= 5590, max=17176, per=51.04%, avg=13955.14, stdev=3853.52, samples=7 00:34:25.198 iops : min= 1397, max= 4294, avg=3488.71, stdev=963.56, samples=7 00:34:25.198 lat (usec) : 250=56.66%, 500=42.74%, 750=0.40%, 1000=0.07% 00:34:25.198 lat (msec) : 2=0.02%, 50=0.10% 00:34:25.198 cpu : usr=1.83%, sys=4.61%, ctx=12353, majf=0, minf=1 00:34:25.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:34:25.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 issued rwts: total=12350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.198 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2402639: Thu Dec 5 14:05:56 2024 00:34:25.198 read: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(41.9MiB/3182msec) 00:34:25.198 slat (nsec): min=4298, max=58153, avg=9041.23, stdev=5257.15 00:34:25.198 clat (usec): min=234, max=3833, avg=282.93, stdev=47.95 00:34:25.198 lat (usec): min=240, max=3847, avg=291.97, stdev=49.90 00:34:25.198 clat percentiles (usec): 00:34:25.198 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:34:25.198 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:34:25.198 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 359], 00:34:25.198 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 457], 99.95th=[ 482], 00:34:25.198 | 99.99th=[ 1156] 00:34:25.198 bw ( KiB/s): min=12328, max=14392, per=49.77%, avg=13608.00, stdev=865.47, samples=6 00:34:25.198 iops : min= 3082, max= 3598, avg=3402.00, stdev=216.37, samples=6 00:34:25.198 lat (usec) : 250=7.70%, 500=92.25%, 750=0.02% 00:34:25.198 lat (msec) : 2=0.01%, 4=0.01% 00:34:25.198 cpu : usr=1.89%, sys=4.37%, ctx=10736, majf=0, minf=2 00:34:25.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 issued rwts: total=10736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.198 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=2402653: Thu Dec 5 14:05:56 2024 00:34:25.198 read: IOPS=39, BW=158KiB/s (162kB/s)(460KiB/2910msec) 00:34:25.198 slat (nsec): min=8287, max=39416, avg=17809.21, stdev=6424.41 00:34:25.198 clat (usec): min=240, max=41296, avg=25069.08, stdev=19888.14 00:34:25.198 lat (usec): min=255, max=41313, avg=25086.92, stdev=19886.00 00:34:25.198 clat percentiles (usec): 00:34:25.198 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 359], 00:34:25.198 | 30.00th=[ 408], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:34:25.198 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:25.198 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:25.198 | 99.99th=[41157] 00:34:25.198 bw ( KiB/s): min= 128, max= 224, per=0.61%, avg=168.00, stdev=51.22, samples=5 00:34:25.198 iops : min= 32, max= 56, avg=42.00, stdev=12.81, samples=5 00:34:25.198 lat (usec) : 250=0.86%, 500=36.21%, 750=1.72% 00:34:25.198 lat (msec) : 50=60.34% 00:34:25.198 cpu : usr=0.00%, sys=0.14%, ctx=119, majf=0, minf=2 00:34:25.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.198 issued rwts: total=116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.198 00:34:25.198 Run status group 0 (all jobs): 00:34:25.198 READ: bw=26.7MiB/s (28.0MB/s), 158KiB/s-13.2MiB/s (162kB/s-13.8MB/s), io=101MiB (106MB), run=2910-3774msec 00:34:25.198 00:34:25.198 Disk stats (read/write): 00:34:25.198 nvme0n1: ios=2593/0, merge=0/0, ticks=3220/0, in_queue=3220, util=95.37% 00:34:25.199 nvme0n2: ios=12345/0, merge=0/0, ticks=3283/0, in_queue=3283, util=95.47% 00:34:25.199 nvme0n3: ios=10529/0, merge=0/0, ticks=2791/0, in_queue=2791, util=96.76% 00:34:25.199 nvme0n4: ios=158/0, merge=0/0, ticks=2988/0, 
in_queue=2988, util=100.00% 00:34:25.456 14:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:25.456 14:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:25.715 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:25.715 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:25.974 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:25.974 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:26.232 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:26.232 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:26.493 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:26.493 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2402502 00:34:26.493 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:26.493 14:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:34:26.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:26.752 nvmf hotplug test: fio failed as expected 00:34:26.752 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.010 rmmod nvme_tcp 00:34:27.010 rmmod nvme_fabrics 00:34:27.010 rmmod nvme_keyring 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2400599 ']' 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2400599 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2400599 ']' 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2400599 
00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:27.010 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.011 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2400599 00:34:27.011 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:27.011 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:27.011 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2400599' 00:34:27.011 killing process with pid 2400599 00:34:27.011 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2400599 00:34:27.011 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2400599 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-restore 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.270 14:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:29.810 00:34:29.810 real 0m23.945s 00:34:29.810 user 1m8.431s 00:34:29.810 sys 0m10.161s 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:29.810 ************************************ 00:34:29.810 END TEST nvmf_fio_target 00:34:29.810 ************************************ 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:29.810 ************************************ 00:34:29.810 START TEST 
nvmf_bdevio 00:34:29.810 ************************************ 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:29.810 * Looking for test storage... 00:34:29.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.810 14:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:29.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.810 --rc genhtml_branch_coverage=1 00:34:29.810 --rc genhtml_function_coverage=1 00:34:29.810 --rc genhtml_legend=1 00:34:29.810 --rc geninfo_all_blocks=1 00:34:29.810 --rc geninfo_unexecuted_blocks=1 00:34:29.810 00:34:29.810 ' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:29.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.810 --rc genhtml_branch_coverage=1 00:34:29.810 --rc genhtml_function_coverage=1 00:34:29.810 --rc genhtml_legend=1 00:34:29.810 --rc geninfo_all_blocks=1 00:34:29.810 --rc geninfo_unexecuted_blocks=1 00:34:29.810 00:34:29.810 ' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:29.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.810 --rc genhtml_branch_coverage=1 00:34:29.810 --rc genhtml_function_coverage=1 00:34:29.810 --rc genhtml_legend=1 00:34:29.810 --rc geninfo_all_blocks=1 00:34:29.810 --rc geninfo_unexecuted_blocks=1 00:34:29.810 00:34:29.810 ' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:29.810 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.810 --rc genhtml_branch_coverage=1 00:34:29.810 --rc genhtml_function_coverage=1 00:34:29.810 --rc genhtml_legend=1 00:34:29.810 --rc geninfo_all_blocks=1 00:34:29.810 --rc geninfo_unexecuted_blocks=1 00:34:29.810 00:34:29.810 ' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.810 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.811 14:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.811 14:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:34:31.712 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.713 14:06:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.713 14:06:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.713 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:31.713 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:31.713 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:31.713 Found net devices under 0000:09:00.0: cvl_0_0 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:31.713 Found net devices under 0000:09:00.1: cvl_0_1 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.713 
14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:31.713 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:31.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:31.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:34:31.714 00:34:31.714 --- 10.0.0.2 ping statistics --- 00:34:31.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.714 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:31.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:31.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:34:31.714 00:34:31.714 --- 10.0.0.1 ping statistics --- 00:34:31.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.714 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2405462 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2405462 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2405462 ']' 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.714 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:31.714 [2024-12-05 14:06:03.206732] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:31.714 [2024-12-05 14:06:03.207786] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:34:31.714 [2024-12-05 14:06:03.207841] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.973 [2024-12-05 14:06:03.281575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:31.973 [2024-12-05 14:06:03.340274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.973 [2024-12-05 14:06:03.340322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.973 [2024-12-05 14:06:03.340351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.973 [2024-12-05 14:06:03.340363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.973 [2024-12-05 14:06:03.340373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.973 [2024-12-05 14:06:03.341953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:31.973 [2024-12-05 14:06:03.342065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:31.973 [2024-12-05 14:06:03.342111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:31.973 [2024-12-05 14:06:03.342115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:31.973 [2024-12-05 14:06:03.429223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:31.973 [2024-12-05 14:06:03.429453] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:31.973 [2024-12-05 14:06:03.429767] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:31.973 [2024-12-05 14:06:03.430444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:31.973 [2024-12-05 14:06:03.430670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.974 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:31.974 [2024-12-05 14:06:03.490819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.233 Malloc0 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.233 [2024-12-05 14:06:03.558969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:32.233 { 00:34:32.233 "params": { 00:34:32.233 "name": "Nvme$subsystem", 00:34:32.233 "trtype": "$TEST_TRANSPORT", 00:34:32.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:32.233 "adrfam": "ipv4", 00:34:32.233 "trsvcid": "$NVMF_PORT", 00:34:32.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:32.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:32.233 "hdgst": ${hdgst:-false}, 00:34:32.233 "ddgst": ${ddgst:-false} 00:34:32.233 }, 00:34:32.233 "method": "bdev_nvme_attach_controller" 00:34:32.233 } 00:34:32.233 EOF 00:34:32.233 )") 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:32.233 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:32.233 "params": { 00:34:32.233 "name": "Nvme1", 00:34:32.233 "trtype": "tcp", 00:34:32.233 "traddr": "10.0.0.2", 00:34:32.233 "adrfam": "ipv4", 00:34:32.233 "trsvcid": "4420", 00:34:32.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:32.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:32.233 "hdgst": false, 00:34:32.233 "ddgst": false 00:34:32.233 }, 00:34:32.233 "method": "bdev_nvme_attach_controller" 00:34:32.233 }' 00:34:32.233 [2024-12-05 14:06:03.610433] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:34:32.233 [2024-12-05 14:06:03.610505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405494 ] 00:34:32.233 [2024-12-05 14:06:03.679777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:32.233 [2024-12-05 14:06:03.743405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.233 [2024-12-05 14:06:03.747438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.233 [2024-12-05 14:06:03.747450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.492 I/O targets: 00:34:32.492 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:32.492 00:34:32.492 00:34:32.492 CUnit - A unit testing framework for C - Version 2.1-3 00:34:32.492 http://cunit.sourceforge.net/ 00:34:32.492 00:34:32.492 00:34:32.492 Suite: bdevio tests on: Nvme1n1 00:34:32.492 Test: blockdev write read block ...passed 00:34:32.752 Test: blockdev write zeroes read block ...passed 00:34:32.752 Test: blockdev write zeroes read no split ...passed 00:34:32.752 Test: blockdev 
write zeroes read split ...passed 00:34:32.752 Test: blockdev write zeroes read split partial ...passed 00:34:32.752 Test: blockdev reset ...[2024-12-05 14:06:04.111733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:32.752 [2024-12-05 14:06:04.111832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168bcb0 (9): Bad file descriptor 00:34:32.753 [2024-12-05 14:06:04.204523] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:32.753 passed 00:34:32.753 Test: blockdev write read 8 blocks ...passed 00:34:32.753 Test: blockdev write read size > 128k ...passed 00:34:32.753 Test: blockdev write read invalid size ...passed 00:34:32.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:32.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:32.753 Test: blockdev write read max offset ...passed 00:34:33.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:33.012 Test: blockdev writev readv 8 blocks ...passed 00:34:33.012 Test: blockdev writev readv 30 x 1block ...passed 00:34:33.012 Test: blockdev writev readv block ...passed 00:34:33.012 Test: blockdev writev readv size > 128k ...passed 00:34:33.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:33.012 Test: blockdev comparev and writev ...[2024-12-05 14:06:04.376670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 [2024-12-05 14:06:04.376705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.376730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 
[2024-12-05 14:06:04.376747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.377173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 [2024-12-05 14:06:04.377198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.377221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 [2024-12-05 14:06:04.377237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.377648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 [2024-12-05 14:06:04.377682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.377704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 [2024-12-05 14:06:04.377720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.378118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 [2024-12-05 14:06:04.378142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.378164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.012 [2024-12-05 14:06:04.378180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:33.012 passed 00:34:33.012 Test: blockdev nvme passthru rw ...passed 00:34:33.012 Test: blockdev nvme passthru vendor specific ...[2024-12-05 14:06:04.461692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.012 [2024-12-05 14:06:04.461719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.461885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.012 [2024-12-05 14:06:04.461909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.462070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.012 [2024-12-05 14:06:04.462093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:33.012 [2024-12-05 14:06:04.462252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.012 [2024-12-05 14:06:04.462275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:33.012 passed 00:34:33.012 Test: blockdev nvme admin passthru ...passed 00:34:33.012 Test: blockdev copy ...passed 00:34:33.012 00:34:33.012 Run Summary: Type Total Ran Passed Failed Inactive 00:34:33.012 suites 1 1 n/a 0 0 00:34:33.012 tests 23 23 23 0 0 00:34:33.012 asserts 152 152 152 0 n/a 00:34:33.012 00:34:33.012 Elapsed time = 1.108 
seconds 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.271 rmmod nvme_tcp 00:34:33.271 rmmod nvme_fabrics 00:34:33.271 rmmod nvme_keyring 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2405462 ']' 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2405462 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2405462 ']' 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2405462 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:33.271 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2405462 00:34:33.530 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:33.530 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:33.530 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2405462' 00:34:33.530 killing process with pid 2405462 00:34:33.530 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2405462 00:34:33.530 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2405462 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.789 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.695 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.695 00:34:35.695 real 0m6.307s 00:34:35.695 user 0m8.250s 00:34:35.695 sys 0m2.446s 00:34:35.695 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.695 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:35.695 ************************************ 00:34:35.695 END TEST nvmf_bdevio 00:34:35.695 ************************************ 00:34:35.695 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:35.695 00:34:35.695 real 3m55.470s 00:34:35.695 user 8m57.452s 00:34:35.695 sys 1m24.614s 00:34:35.695 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:34:35.695 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:35.695 ************************************ 00:34:35.695 END TEST nvmf_target_core_interrupt_mode 00:34:35.695 ************************************ 00:34:35.695 14:06:07 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:35.695 14:06:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:35.695 14:06:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.695 14:06:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.695 ************************************ 00:34:35.695 START TEST nvmf_interrupt 00:34:35.695 ************************************ 00:34:35.695 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:35.954 * Looking for test storage... 
00:34:35.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:35.954 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:35.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.955 --rc genhtml_branch_coverage=1 00:34:35.955 --rc genhtml_function_coverage=1 00:34:35.955 --rc genhtml_legend=1 00:34:35.955 --rc geninfo_all_blocks=1 00:34:35.955 --rc geninfo_unexecuted_blocks=1 00:34:35.955 00:34:35.955 ' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:35.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.955 --rc genhtml_branch_coverage=1 00:34:35.955 --rc 
genhtml_function_coverage=1 00:34:35.955 --rc genhtml_legend=1 00:34:35.955 --rc geninfo_all_blocks=1 00:34:35.955 --rc geninfo_unexecuted_blocks=1 00:34:35.955 00:34:35.955 ' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:35.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.955 --rc genhtml_branch_coverage=1 00:34:35.955 --rc genhtml_function_coverage=1 00:34:35.955 --rc genhtml_legend=1 00:34:35.955 --rc geninfo_all_blocks=1 00:34:35.955 --rc geninfo_unexecuted_blocks=1 00:34:35.955 00:34:35.955 ' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:35.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.955 --rc genhtml_branch_coverage=1 00:34:35.955 --rc genhtml_function_coverage=1 00:34:35.955 --rc genhtml_legend=1 00:34:35.955 --rc geninfo_all_blocks=1 00:34:35.955 --rc geninfo_unexecuted_blocks=1 00:34:35.955 00:34:35.955 ' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.955 
14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.955 
14:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.955 14:06:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.955 
14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.955 14:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.859 14:06:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:37.859 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:37.859 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.859 14:06:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:37.859 Found net devices under 0000:09:00.0: cvl_0_0 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:37.859 Found net devices under 0000:09:00.1: cvl_0_1 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.859 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:38.118 14:06:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:38.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:38.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:34:38.118 00:34:38.118 --- 10.0.0.2 ping statistics --- 00:34:38.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.118 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:38.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:38.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:34:38.118 00:34:38.118 --- 10.0.0.1 ping statistics --- 00:34:38.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.118 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:38.118 14:06:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2408068 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2408068 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2408068 ']' 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.118 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.118 [2024-12-05 14:06:09.545280] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:38.118 [2024-12-05 14:06:09.546361] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:34:38.118 [2024-12-05 14:06:09.546436] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.118 [2024-12-05 14:06:09.618227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:38.377 [2024-12-05 14:06:09.674510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.377 [2024-12-05 14:06:09.674563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.377 [2024-12-05 14:06:09.674581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.377 [2024-12-05 14:06:09.674599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.377 [2024-12-05 14:06:09.674609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:38.377 [2024-12-05 14:06:09.675981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.377 [2024-12-05 14:06:09.675986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.377 [2024-12-05 14:06:09.762925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:38.377 [2024-12-05 14:06:09.762929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:38.377 [2024-12-05 14:06:09.763198] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:38.377 5000+0 records in 00:34:38.377 5000+0 records out 00:34:38.377 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0136437 s, 751 MB/s 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.377 AIO0 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.377 14:06:09 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.377 [2024-12-05 14:06:09.864600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:38.377 [2024-12-05 14:06:09.888799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2408068 0 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2408068 0 idle 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:38.377 14:06:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408068 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0' 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408068 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.635 
14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2408068 1 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2408068 1 idle 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:38.635 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408085 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408085 root 20 0 128.2g 
47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2408246 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2408068 0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2408068 0 busy 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408068 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.48 reactor_0' 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408068 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.48 reactor_0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:38.893 14:06:10 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2408068 1 00:34:38.893 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2408068 1 busy 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:38.894 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408085 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.27 reactor_1' 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408085 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.27 reactor_1 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:39.153 14:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2408246 00:34:49.195 Initializing NVMe Controllers 00:34:49.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:49.195 Controller IO queue size 256, less than required. 00:34:49.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:49.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:49.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:49.195 Initialization complete. Launching workers. 
00:34:49.195 ======================================================== 00:34:49.195 Latency(us) 00:34:49.195 Device Information : IOPS MiB/s Average min max 00:34:49.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13898.80 54.29 18431.30 4372.12 22196.59 00:34:49.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13321.20 52.04 19230.92 4030.72 23335.58 00:34:49.195 ======================================================== 00:34:49.195 Total : 27219.99 106.33 18822.63 4030.72 23335.58 00:34:49.195 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2408068 0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2408068 0 idle 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408068 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0' 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408068 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2408068 1 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2408068 1 idle 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:49.195 14:06:20 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408085 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1' 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408085 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:49.195 14:06:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:49.455 14:06:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:34:49.455 14:06:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:49.455 14:06:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:49.455 14:06:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:49.455 14:06:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2408068 0 00:34:51.996 14:06:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2408068 0 idle 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408068 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.31 reactor_0' 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408068 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.31 reactor_0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2408068 1 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2408068 1 idle 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2408068 00:34:51.996 
14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2408068 -w 256 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2408085 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.02 reactor_1' 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2408085 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.02 reactor_1 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:51.996 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:51.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.997 rmmod nvme_tcp 00:34:51.997 rmmod nvme_fabrics 00:34:51.997 rmmod nvme_keyring 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:51.997 14:06:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2408068 ']' 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2408068 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2408068 ']' 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2408068 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.997 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2408068 00:34:52.255 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:52.255 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:52.255 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2408068' 00:34:52.255 killing process with pid 2408068 00:34:52.255 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2408068 00:34:52.255 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2408068 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:52.514 14:06:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.421 14:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:54.421 00:34:54.421 real 0m18.644s 00:34:54.421 user 0m36.476s 00:34:54.421 sys 0m6.903s 00:34:54.421 14:06:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.421 14:06:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:54.421 ************************************ 00:34:54.421 END TEST nvmf_interrupt 00:34:54.421 ************************************ 00:34:54.421 00:34:54.421 real 24m57.456s 00:34:54.421 user 58m36.533s 00:34:54.421 sys 6m42.500s 00:34:54.421 14:06:25 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.421 14:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.421 ************************************ 00:34:54.421 END TEST nvmf_tcp 00:34:54.421 ************************************ 00:34:54.421 14:06:25 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:54.421 14:06:25 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:54.421 14:06:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:54.421 14:06:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.421 14:06:25 -- common/autotest_common.sh@10 -- # set +x 00:34:54.421 ************************************ 
00:34:54.421 START TEST spdkcli_nvmf_tcp 00:34:54.421 ************************************ 00:34:54.421 14:06:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:54.421 * Looking for test storage... 00:34:54.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:54.679 14:06:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:54.679 14:06:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:34:54.679 14:06:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:54.679 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:54.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.680 --rc genhtml_branch_coverage=1 00:34:54.680 --rc genhtml_function_coverage=1 00:34:54.680 --rc genhtml_legend=1 00:34:54.680 --rc geninfo_all_blocks=1 00:34:54.680 --rc geninfo_unexecuted_blocks=1 00:34:54.680 00:34:54.680 ' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:54.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.680 --rc genhtml_branch_coverage=1 00:34:54.680 --rc genhtml_function_coverage=1 00:34:54.680 --rc genhtml_legend=1 00:34:54.680 --rc geninfo_all_blocks=1 
00:34:54.680 --rc geninfo_unexecuted_blocks=1 00:34:54.680 00:34:54.680 ' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:54.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.680 --rc genhtml_branch_coverage=1 00:34:54.680 --rc genhtml_function_coverage=1 00:34:54.680 --rc genhtml_legend=1 00:34:54.680 --rc geninfo_all_blocks=1 00:34:54.680 --rc geninfo_unexecuted_blocks=1 00:34:54.680 00:34:54.680 ' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:54.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.680 --rc genhtml_branch_coverage=1 00:34:54.680 --rc genhtml_function_coverage=1 00:34:54.680 --rc genhtml_legend=1 00:34:54.680 --rc geninfo_all_blocks=1 00:34:54.680 --rc geninfo_unexecuted_blocks=1 00:34:54.680 00:34:54.680 ' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:54.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2410264 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2410264 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2410264 ']' 00:34:54.680 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.680 
14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.681 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.681 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.681 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.681 [2024-12-05 14:06:26.094648] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:34:54.681 [2024-12-05 14:06:26.094738] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410264 ] 00:34:54.681 [2024-12-05 14:06:26.160238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:54.939 [2024-12-05 14:06:26.218512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.939 [2024-12-05 14:06:26.218516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.939 14:06:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:54.939 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:54.939 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:54.939 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:54.939 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:54.939 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:54.939 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:54.939 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:54.939 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:54.939 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:54.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:54.939 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:54.939 ' 00:34:57.476 [2024-12-05 14:06:28.960995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:58.856 [2024-12-05 14:06:30.233450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:01.396 [2024-12-05 14:06:32.576447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:35:03.303 [2024-12-05 14:06:34.586699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:04.679 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:04.679 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:04.679 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:04.679 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:04.679 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:04.679 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:04.679 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:04.679 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:04.679 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:04.679 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:04.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:04.679 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:04.937 14:06:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.504 14:06:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:05.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:05.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:05.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:05.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:05.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:05.504 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:05.504 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:05.504 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:05.504 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:05.504 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:05.504 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:05.504 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:05.504 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:05.504 ' 00:35:10.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:10.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:10.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:10.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:10.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:10.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:10.770 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:10.770 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:10.770 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:10.770 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:10.770 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:10.770 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:10.770 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:10.770 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2410264 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2410264 ']' 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2410264 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2410264 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2410264' 00:35:10.770 killing process with pid 2410264 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2410264 00:35:10.770 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2410264 00:35:11.029 14:06:42 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2410264 ']' 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2410264 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2410264 ']' 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2410264 00:35:11.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2410264) - No such process 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2410264 is not found' 00:35:11.029 Process with pid 2410264 is not found 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:11.029 00:35:11.029 real 0m16.571s 00:35:11.029 user 0m35.341s 00:35:11.029 sys 0m0.754s 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:11.029 14:06:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.029 ************************************ 00:35:11.029 END TEST spdkcli_nvmf_tcp 00:35:11.029 ************************************ 00:35:11.029 14:06:42 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:11.029 14:06:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:11.029 14:06:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:35:11.029 14:06:42 -- common/autotest_common.sh@10 -- # set +x 00:35:11.029 ************************************ 00:35:11.029 START TEST nvmf_identify_passthru 00:35:11.029 ************************************ 00:35:11.029 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:11.288 * Looking for test storage... 00:35:11.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:11.288 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:11.288 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:35:11.288 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:11.288 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:11.289 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:11.289 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:11.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.289 --rc genhtml_branch_coverage=1 00:35:11.289 --rc genhtml_function_coverage=1 00:35:11.289 --rc genhtml_legend=1 00:35:11.289 --rc geninfo_all_blocks=1 00:35:11.289 --rc geninfo_unexecuted_blocks=1 00:35:11.289 
00:35:11.289 ' 00:35:11.289 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:11.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.289 --rc genhtml_branch_coverage=1 00:35:11.289 --rc genhtml_function_coverage=1 00:35:11.289 --rc genhtml_legend=1 00:35:11.289 --rc geninfo_all_blocks=1 00:35:11.289 --rc geninfo_unexecuted_blocks=1 00:35:11.289 00:35:11.289 ' 00:35:11.289 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:11.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.289 --rc genhtml_branch_coverage=1 00:35:11.289 --rc genhtml_function_coverage=1 00:35:11.289 --rc genhtml_legend=1 00:35:11.289 --rc geninfo_all_blocks=1 00:35:11.289 --rc geninfo_unexecuted_blocks=1 00:35:11.289 00:35:11.289 ' 00:35:11.289 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:11.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.289 --rc genhtml_branch_coverage=1 00:35:11.289 --rc genhtml_function_coverage=1 00:35:11.289 --rc genhtml_legend=1 00:35:11.289 --rc geninfo_all_blocks=1 00:35:11.289 --rc geninfo_unexecuted_blocks=1 00:35:11.289 00:35:11.289 ' 00:35:11.289 14:06:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.289 14:06:42 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:11.289 14:06:42 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:11.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:11.289 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:11.289 14:06:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.289 14:06:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.289 14:06:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:11.290 14:06:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.290 14:06:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.290 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:11.290 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:11.290 14:06:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:11.290 14:06:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.824 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.825 
14:06:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:13.825 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:13.825 Found 0000:09:00.1 
(0x8086 - 0x159b) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:13.825 Found net devices under 0000:09:00.0: cvl_0_0 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.825 14:06:44 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:13.825 Found net devices under 0000:09:00.1: cvl_0_1 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:13.825 
14:06:44 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:13.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:35:13.825 00:35:13.825 --- 10.0.0.2 ping statistics --- 00:35:13.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.825 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:13.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:35:13.825 00:35:13.825 --- 10.0.0.1 ping statistics --- 00:35:13.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.825 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:13.825 14:06:44 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:13.825 14:06:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.825 14:06:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:13.825 
14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:35:13.825 14:06:44 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:35:13.825 14:06:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:35:13.825 14:06:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:35:13.825 14:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:35:13.825 14:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:13.825 14:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:18.017 14:06:49 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:35:18.017 14:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:35:18.017 14:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:18.017 14:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2414831 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2414831 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2414831 ']' 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.218 [2024-12-05 14:06:53.384692] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:35:22.218 [2024-12-05 14:06:53.384794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:22.218 [2024-12-05 14:06:53.458217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:22.218 [2024-12-05 14:06:53.515369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:22.218 [2024-12-05 14:06:53.515451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:22.218 [2024-12-05 14:06:53.515476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:22.218 [2024-12-05 14:06:53.515487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:22.218 [2024-12-05 14:06:53.515512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:22.218 [2024-12-05 14:06:53.517017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.218 [2024-12-05 14:06:53.517082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:22.218 [2024-12-05 14:06:53.517146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:22.218 [2024-12-05 14:06:53.517149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.218 INFO: Log level set to 20 00:35:22.218 INFO: Requests: 00:35:22.218 { 00:35:22.218 "jsonrpc": "2.0", 00:35:22.218 "method": "nvmf_set_config", 00:35:22.218 "id": 1, 00:35:22.218 "params": { 00:35:22.218 "admin_cmd_passthru": { 00:35:22.218 "identify_ctrlr": true 00:35:22.218 } 00:35:22.218 } 00:35:22.218 } 00:35:22.218 00:35:22.218 INFO: response: 00:35:22.218 { 00:35:22.218 "jsonrpc": "2.0", 00:35:22.218 "id": 1, 00:35:22.218 "result": true 00:35:22.218 } 00:35:22.218 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.218 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.218 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.218 INFO: Setting log level to 20 00:35:22.218 INFO: Setting log level to 20 00:35:22.218 INFO: Log level set to 20 00:35:22.218 INFO: Log level set to 20 00:35:22.218 
INFO: Requests: 00:35:22.218 { 00:35:22.218 "jsonrpc": "2.0", 00:35:22.218 "method": "framework_start_init", 00:35:22.218 "id": 1 00:35:22.218 } 00:35:22.218 00:35:22.218 INFO: Requests: 00:35:22.218 { 00:35:22.218 "jsonrpc": "2.0", 00:35:22.218 "method": "framework_start_init", 00:35:22.218 "id": 1 00:35:22.218 } 00:35:22.218 00:35:22.218 [2024-12-05 14:06:53.722750] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:22.218 INFO: response: 00:35:22.218 { 00:35:22.218 "jsonrpc": "2.0", 00:35:22.218 "id": 1, 00:35:22.219 "result": true 00:35:22.219 } 00:35:22.219 00:35:22.219 INFO: response: 00:35:22.219 { 00:35:22.219 "jsonrpc": "2.0", 00:35:22.219 "id": 1, 00:35:22.219 "result": true 00:35:22.219 } 00:35:22.219 00:35:22.219 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.219 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:22.219 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.219 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.219 INFO: Setting log level to 40 00:35:22.219 INFO: Setting log level to 40 00:35:22.219 INFO: Setting log level to 40 00:35:22.219 [2024-12-05 14:06:53.732888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.219 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.219 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:22.219 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:22.219 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.477 14:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:35:22.477 14:06:53 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.477 14:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.834 Nvme0n1 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.834 [2024-12-05 14:06:56.633972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.834 14:06:56 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.834 [ 00:35:25.834 { 00:35:25.834 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:25.834 "subtype": "Discovery", 00:35:25.834 "listen_addresses": [], 00:35:25.834 "allow_any_host": true, 00:35:25.834 "hosts": [] 00:35:25.834 }, 00:35:25.834 { 00:35:25.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.834 "subtype": "NVMe", 00:35:25.834 "listen_addresses": [ 00:35:25.834 { 00:35:25.834 "trtype": "TCP", 00:35:25.834 "adrfam": "IPv4", 00:35:25.834 "traddr": "10.0.0.2", 00:35:25.834 "trsvcid": "4420" 00:35:25.834 } 00:35:25.834 ], 00:35:25.834 "allow_any_host": true, 00:35:25.834 "hosts": [], 00:35:25.834 "serial_number": "SPDK00000000000001", 00:35:25.834 "model_number": "SPDK bdev Controller", 00:35:25.834 "max_namespaces": 1, 00:35:25.834 "min_cntlid": 1, 00:35:25.834 "max_cntlid": 65519, 00:35:25.834 "namespaces": [ 00:35:25.834 { 00:35:25.834 "nsid": 1, 00:35:25.834 "bdev_name": "Nvme0n1", 00:35:25.834 "name": "Nvme0n1", 00:35:25.834 "nguid": "0690E3A5775F4A59914F8D8B4D4A98C0", 00:35:25.834 "uuid": "0690e3a5-775f-4a59-914f-8d8b4d4a98c0" 00:35:25.834 } 00:35:25.834 ] 00:35:25.834 } 00:35:25.834 ] 00:35:25.834 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:25.834 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:25.835 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:35:25.835 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:25.835 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.835 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:25.835 14:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:25.835 rmmod nvme_tcp 00:35:25.835 rmmod nvme_fabrics 00:35:25.835 rmmod nvme_keyring 00:35:25.835 14:06:56 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2414831 ']' 00:35:25.835 14:06:56 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2414831 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2414831 ']' 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2414831 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.835 14:06:56 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2414831 00:35:25.835 14:06:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.835 14:06:57 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.835 14:06:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2414831' 00:35:25.835 killing process with pid 2414831 00:35:25.835 14:06:57 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2414831 00:35:25.835 14:06:57 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2414831 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:27.214 14:06:58 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.214 14:06:58 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.214 14:06:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:27.214 14:06:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.121 14:07:00 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.121 00:35:29.121 real 0m18.062s 00:35:29.121 user 0m25.842s 00:35:29.121 sys 0m3.194s 00:35:29.121 14:07:00 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.121 14:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.121 ************************************ 00:35:29.121 END TEST nvmf_identify_passthru 00:35:29.121 ************************************ 00:35:29.121 14:07:00 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:29.121 14:07:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:29.121 14:07:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.121 14:07:00 -- common/autotest_common.sh@10 -- # set +x 00:35:29.121 ************************************ 00:35:29.121 START TEST nvmf_dif 00:35:29.121 ************************************ 00:35:29.121 14:07:00 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:29.378 * Looking for test storage... 
00:35:29.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:29.378 14:07:00 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:29.378 14:07:00 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:35:29.378 14:07:00 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:29.378 14:07:00 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.378 14:07:00 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:29.379 14:07:00 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.379 14:07:00 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:29.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.379 --rc genhtml_branch_coverage=1 00:35:29.379 --rc genhtml_function_coverage=1 00:35:29.379 --rc genhtml_legend=1 00:35:29.379 --rc geninfo_all_blocks=1 00:35:29.379 --rc geninfo_unexecuted_blocks=1 00:35:29.379 00:35:29.379 ' 00:35:29.379 14:07:00 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:29.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.379 --rc genhtml_branch_coverage=1 00:35:29.379 --rc genhtml_function_coverage=1 00:35:29.379 --rc genhtml_legend=1 00:35:29.379 --rc geninfo_all_blocks=1 00:35:29.379 --rc geninfo_unexecuted_blocks=1 00:35:29.379 00:35:29.379 ' 00:35:29.379 14:07:00 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:35:29.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.379 --rc genhtml_branch_coverage=1 00:35:29.379 --rc genhtml_function_coverage=1 00:35:29.379 --rc genhtml_legend=1 00:35:29.379 --rc geninfo_all_blocks=1 00:35:29.379 --rc geninfo_unexecuted_blocks=1 00:35:29.379 00:35:29.379 ' 00:35:29.379 14:07:00 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:29.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.379 --rc genhtml_branch_coverage=1 00:35:29.379 --rc genhtml_function_coverage=1 00:35:29.379 --rc genhtml_legend=1 00:35:29.379 --rc geninfo_all_blocks=1 00:35:29.379 --rc geninfo_unexecuted_blocks=1 00:35:29.379 00:35:29.379 ' 00:35:29.379 14:07:00 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:29.379 14:07:00 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.379 14:07:00 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.379 14:07:00 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.379 14:07:00 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.379 14:07:00 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.379 14:07:00 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:29.379 14:07:00 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:29.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.379 14:07:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:29.379 14:07:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
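The `[: : integer expression expected` complaint captured above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'` with an empty variable; `-eq` requires integer operands. A minimal sketch (hypothetical helper name, not from the SPDK tree) of a guard that keeps `-eq` from ever seeing a non-integer:

```shell
#!/usr/bin/env bash
# Hypothetical guard: validate before the numeric comparison so an empty or
# unset variable degrades to plain "false" instead of an [ -eq ] error message.
is_one() {
  local v=$1
  [[ $v =~ ^[0-9]+$ ]] || return 1   # empty/non-numeric: fail quietly
  [ "$v" -eq 1 ]
}

is_one ""  || echo "empty value: treated as not-1, no error printed"
is_one 1   && echo "1: matches"
```

With the guard, the traced `'[' '' -eq 1 ']'` path would simply take the false branch rather than emitting the stderr noise seen in the log.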
00:35:29.379 14:07:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:29.379 14:07:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:29.379 14:07:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.379 14:07:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.379 14:07:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:29.379 14:07:00 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:29.380 14:07:00 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:29.380 14:07:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:31.911 14:07:02 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:31.911 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:31.911 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.911 14:07:02 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:31.911 Found net devices under 0000:09:00.0: cvl_0_0 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:31.911 Found net devices under 0000:09:00.1: cvl_0_1 00:35:31.911 14:07:02 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:31.912 
14:07:02 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:31.912 14:07:02 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:31.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:31.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:35:31.912 00:35:31.912 --- 10.0.0.2 ping statistics --- 00:35:31.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.912 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:31.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:31.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:35:31.912 00:35:31.912 --- 10.0.0.1 ping statistics --- 00:35:31.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.912 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:31.912 14:07:03 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:32.844 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:32.844 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:32.844 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:32.845 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:32.845 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:32.845 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:32.845 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:32.845 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:32.845 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:32.845 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:32.845 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:32.845 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:32.845 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:35:32.845 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:32.845 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:32.845 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:32.845 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:32.845 14:07:04 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.845 14:07:04 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:32.845 14:07:04 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:32.845 14:07:04 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.845 14:07:04 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:32.845 14:07:04 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:33.102 14:07:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:33.102 14:07:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:33.102 14:07:04 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.102 14:07:04 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2418071 00:35:33.102 14:07:04 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:33.102 14:07:04 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2418071 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2418071 ']' 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:33.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.102 14:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.102 [2024-12-05 14:07:04.441649] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:35:33.102 [2024-12-05 14:07:04.441751] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.102 [2024-12-05 14:07:04.512139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.102 [2024-12-05 14:07:04.563063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:33.102 [2024-12-05 14:07:04.563122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:33.102 [2024-12-05 14:07:04.563149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:33.102 [2024-12-05 14:07:04.563161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:33.102 [2024-12-05 14:07:04.563170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:33.102 [2024-12-05 14:07:04.563732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:33.361 14:07:04 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.361 14:07:04 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.361 14:07:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:33.361 14:07:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.361 [2024-12-05 14:07:04.702990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.361 14:07:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:33.361 14:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.361 ************************************ 00:35:33.361 START TEST fio_dif_1_default 00:35:33.361 ************************************ 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.361 bdev_null0 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.361 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.362 [2024-12-05 14:07:04.759274] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.362 { 00:35:33.362 "params": { 00:35:33.362 "name": "Nvme$subsystem", 00:35:33.362 "trtype": "$TEST_TRANSPORT", 00:35:33.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.362 "adrfam": "ipv4", 00:35:33.362 "trsvcid": "$NVMF_PORT", 00:35:33.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.362 "hdgst": ${hdgst:-false}, 00:35:33.362 "ddgst": ${ddgst:-false} 00:35:33.362 }, 00:35:33.362 "method": "bdev_nvme_attach_controller" 00:35:33.362 } 00:35:33.362 EOF 00:35:33.362 )") 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:33.362 "params": { 00:35:33.362 "name": "Nvme0", 00:35:33.362 "trtype": "tcp", 00:35:33.362 "traddr": "10.0.0.2", 00:35:33.362 "adrfam": "ipv4", 00:35:33.362 "trsvcid": "4420", 00:35:33.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:33.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:33.362 "hdgst": false, 00:35:33.362 "ddgst": false 00:35:33.362 }, 00:35:33.362 "method": "bdev_nvme_attach_controller" 00:35:33.362 }' 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:33.362 14:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.620 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:33.620 fio-3.35 
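The JSON handed to fio's bdev plugin over `/dev/fd/62` above is assembled per-subsystem by `gen_nvmf_target_json` from a heredoc template and then joined with `jq`. A standalone sketch (hypothetical function name; addresses and NQNs copied from the log, while the real helper interpolates `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, etc.) of emitting one such attach-controller record:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the per-subsystem JSON printed in the trace; the
# real helper lives in nvmf/common.sh and builds this from environment vars.
gen_nvme0_conf() {
  cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvme0_conf
```

Feeding this record to `--spdk_json_conf` is what lets fio attach the NVMe-oF controller without a separate config file on disk.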
00:35:33.621 Starting 1 thread 00:35:45.854 00:35:45.854 filename0: (groupid=0, jobs=1): err= 0: pid=2418299: Thu Dec 5 14:07:15 2024 00:35:45.854 read: IOPS=844, BW=3377KiB/s (3458kB/s)(33.1MiB/10030msec) 00:35:45.854 slat (nsec): min=4421, max=35669, avg=9672.12, stdev=2733.53 00:35:45.854 clat (usec): min=519, max=42700, avg=4708.03, stdev=12224.47 00:35:45.854 lat (usec): min=527, max=42713, avg=4717.70, stdev=12224.49 00:35:45.854 clat percentiles (usec): 00:35:45.854 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 603], 00:35:45.854 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 668], 60.00th=[ 685], 00:35:45.854 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 898], 95.00th=[41681], 00:35:45.854 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:45.854 | 99.99th=[42730] 00:35:45.854 bw ( KiB/s): min= 1408, max= 7872, per=100.00%, avg=3385.60, stdev=1846.00, samples=20 00:35:45.854 iops : min= 352, max= 1968, avg=846.40, stdev=461.50, samples=20 00:35:45.854 lat (usec) : 750=81.12%, 1000=8.96% 00:35:45.854 lat (msec) : 10=0.05%, 50=9.87% 00:35:45.854 cpu : usr=90.65%, sys=9.03%, ctx=14, majf=0, minf=203 00:35:45.854 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.854 issued rwts: total=8468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.854 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:45.854 00:35:45.854 Run status group 0 (all jobs): 00:35:45.854 READ: bw=3377KiB/s (3458kB/s), 3377KiB/s-3377KiB/s (3458kB/s-3458kB/s), io=33.1MiB (34.7MB), run=10030-10030msec 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 00:35:45.854 real 0m11.139s 00:35:45.854 user 0m10.314s 00:35:45.854 sys 0m1.179s 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 ************************************ 00:35:45.854 END TEST fio_dif_1_default 00:35:45.854 ************************************ 00:35:45.854 14:07:15 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:45.854 14:07:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.854 14:07:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 ************************************ 00:35:45.854 START TEST fio_dif_1_multi_subsystems 00:35:45.854 ************************************ 00:35:45.854 14:07:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 bdev_null0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 [2024-12-05 14:07:15.952862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 bdev_null1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.854 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:45.855 { 00:35:45.855 "params": { 00:35:45.855 "name": "Nvme$subsystem", 00:35:45.855 "trtype": "$TEST_TRANSPORT", 00:35:45.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.855 "adrfam": "ipv4", 00:35:45.855 "trsvcid": "$NVMF_PORT", 00:35:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.855 "hdgst": ${hdgst:-false}, 00:35:45.855 "ddgst": ${ddgst:-false} 00:35:45.855 }, 00:35:45.855 "method": "bdev_nvme_attach_controller" 00:35:45.855 } 00:35:45.855 EOF 00:35:45.855 )") 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:45.855 { 00:35:45.855 "params": { 00:35:45.855 "name": "Nvme$subsystem", 00:35:45.855 "trtype": "$TEST_TRANSPORT", 00:35:45.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.855 "adrfam": "ipv4", 00:35:45.855 "trsvcid": "$NVMF_PORT", 00:35:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.855 "hdgst": ${hdgst:-false}, 00:35:45.855 "ddgst": ${ddgst:-false} 00:35:45.855 }, 00:35:45.855 "method": "bdev_nvme_attach_controller" 00:35:45.855 } 00:35:45.855 EOF 00:35:45.855 )") 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:35:45.855 14:07:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:45.855 "params": { 00:35:45.855 "name": "Nvme0", 00:35:45.855 "trtype": "tcp", 00:35:45.855 "traddr": "10.0.0.2", 00:35:45.855 "adrfam": "ipv4", 00:35:45.855 "trsvcid": "4420", 00:35:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:45.855 "hdgst": false, 00:35:45.855 "ddgst": false 00:35:45.855 }, 00:35:45.855 "method": "bdev_nvme_attach_controller" 00:35:45.855 },{ 00:35:45.855 "params": { 00:35:45.855 "name": "Nvme1", 00:35:45.855 "trtype": "tcp", 00:35:45.855 "traddr": "10.0.0.2", 00:35:45.855 "adrfam": "ipv4", 00:35:45.855 "trsvcid": "4420", 00:35:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.855 "hdgst": false, 00:35:45.855 "ddgst": false 00:35:45.855 }, 00:35:45.855 "method": "bdev_nvme_attach_controller" 00:35:45.855 }' 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:45.855 14:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.855 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:45.855 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:45.855 fio-3.35 00:35:45.855 Starting 2 threads 00:35:55.827 00:35:55.827 filename0: (groupid=0, jobs=1): err= 0: pid=2419698: Thu Dec 5 14:07:27 2024 00:35:55.827 read: IOPS=114, BW=460KiB/s (471kB/s)(4608KiB/10020msec) 00:35:55.827 slat (nsec): min=4345, max=72842, avg=11995.06, stdev=6114.06 00:35:55.827 clat (usec): min=574, max=45383, avg=34751.06, stdev=14684.40 00:35:55.827 lat (usec): min=582, max=45397, avg=34763.06, stdev=14684.17 00:35:55.827 clat percentiles (usec): 00:35:55.827 | 1.00th=[ 586], 5.00th=[ 611], 10.00th=[ 652], 20.00th=[40633], 00:35:55.827 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:55.827 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:55.827 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:35:55.827 | 99.99th=[45351] 00:35:55.827 bw ( KiB/s): min= 384, max= 576, per=51.36%, avg=459.20, stdev=45.58, samples=20 00:35:55.827 iops : min= 96, max= 144, avg=114.80, stdev=11.40, samples=20 00:35:55.828 lat (usec) : 750=14.93% 00:35:55.828 lat (msec) : 2=0.69%, 50=84.38% 00:35:55.828 cpu : usr=97.37%, sys=2.27%, ctx=19, majf=0, minf=203 00:35:55.828 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:55.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.828 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:55.828 filename1: (groupid=0, jobs=1): err= 0: pid=2419699: Thu Dec 5 14:07:27 2024 00:35:55.828 read: IOPS=108, BW=434KiB/s (444kB/s)(4352KiB/10026msec) 00:35:55.828 slat (nsec): min=6059, max=44758, avg=11976.34, stdev=6356.85 00:35:55.828 clat (usec): min=586, max=44492, avg=36820.45, stdev=12516.85 00:35:55.828 lat (usec): min=593, max=44536, avg=36832.43, stdev=12516.95 00:35:55.828 clat percentiles (usec): 00:35:55.828 | 1.00th=[ 594], 5.00th=[ 619], 10.00th=[ 685], 20.00th=[40633], 00:35:55.828 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:55.828 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:35:55.828 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:35:55.828 | 99.99th=[44303] 00:35:55.828 bw ( KiB/s): min= 384, max= 512, per=48.45%, avg=433.60, stdev=36.67, samples=20 00:35:55.828 iops : min= 96, max= 128, avg=108.40, stdev= 9.17, samples=20 00:35:55.828 lat (usec) : 750=10.66% 00:35:55.828 lat (msec) : 50=89.34% 00:35:55.828 cpu : usr=97.09%, sys=2.62%, ctx=14, majf=0, minf=111 00:35:55.828 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.828 issued rwts: total=1088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:55.828 00:35:55.828 Run status group 0 (all jobs): 00:35:55.828 READ: bw=894KiB/s (915kB/s), 434KiB/s-460KiB/s (444kB/s-471kB/s), io=8960KiB (9175kB), run=10020-10026msec 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 14:07:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 00:35:56.087 real 0m11.508s 00:35:56.087 user 0m20.934s 00:35:56.087 sys 0m0.801s 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 ************************************ 00:35:56.087 END TEST fio_dif_1_multi_subsystems 00:35:56.087 ************************************ 00:35:56.087 14:07:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:56.087 14:07:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.087 14:07:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 ************************************ 00:35:56.087 START TEST fio_dif_rand_params 00:35:56.087 ************************************ 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:56.087 14:07:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 bdev_null0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.087 [2024-12-05 14:07:27.508601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:56.087 { 00:35:56.087 "params": { 00:35:56.087 "name": "Nvme$subsystem", 00:35:56.087 "trtype": "$TEST_TRANSPORT", 00:35:56.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.087 "adrfam": "ipv4", 00:35:56.087 "trsvcid": "$NVMF_PORT", 00:35:56.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.087 "hdgst": ${hdgst:-false}, 00:35:56.087 "ddgst": ${ddgst:-false} 00:35:56.087 }, 00:35:56.087 "method": "bdev_nvme_attach_controller" 00:35:56.087 } 00:35:56.087 EOF 00:35:56.087 )") 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:56.087 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.088 14:07:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:56.088 "params": { 00:35:56.088 "name": "Nvme0", 00:35:56.088 "trtype": "tcp", 00:35:56.088 "traddr": "10.0.0.2", 00:35:56.088 "adrfam": "ipv4", 00:35:56.088 "trsvcid": "4420", 00:35:56.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:56.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:56.088 "hdgst": false, 00:35:56.088 "ddgst": false 00:35:56.088 }, 00:35:56.088 "method": "bdev_nvme_attach_controller" 00:35:56.088 }' 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:56.088 14:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.347 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:56.347 ... 00:35:56.347 fio-3.35 00:35:56.347 Starting 3 threads 00:36:02.940 00:36:02.940 filename0: (groupid=0, jobs=1): err= 0: pid=2421098: Thu Dec 5 14:07:33 2024 00:36:02.940 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5045msec) 00:36:02.940 slat (nsec): min=4570, max=50142, avg=16261.80, stdev=5867.39 00:36:02.940 clat (usec): min=5458, max=54945, avg=14138.62, stdev=7826.43 00:36:02.940 lat (usec): min=5465, max=54972, avg=14154.89, stdev=7826.23 00:36:02.940 clat percentiles (usec): 00:36:02.940 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[11731], 00:36:02.940 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:36:02.940 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14615], 95.00th=[16188], 00:36:02.940 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:36:02.940 | 99.99th=[54789] 00:36:02.940 bw ( KiB/s): min=20736, max=31744, per=34.08%, avg=27217.60, stdev=3071.43, samples=10 00:36:02.940 iops : min= 162, max= 248, avg=212.60, stdev=24.04, samples=10 00:36:02.940 lat (msec) : 10=8.26%, 20=87.90%, 50=0.66%, 100=3.19% 00:36:02.940 cpu : usr=94.43%, sys=5.08%, ctx=14, majf=0, minf=112 00:36:02.940 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.940 issued rwts: total=1066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.940 filename0: (groupid=0, jobs=1): err= 0: pid=2421099: Thu Dec 5 14:07:33 2024 00:36:02.940 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5045msec) 00:36:02.940 slat (nsec): min=4724, max=57230, avg=18981.24, stdev=4936.17 00:36:02.940 
clat (usec): min=5007, max=54107, avg=13952.57, stdev=4233.24 00:36:02.940 lat (usec): min=5019, max=54119, avg=13971.55, stdev=4233.34 00:36:02.940 clat percentiles (usec): 00:36:02.940 | 1.00th=[ 5473], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[11338], 00:36:02.940 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484], 00:36:02.940 | 70.00th=[15008], 80.00th=[15795], 90.00th=[17171], 95.00th=[17957], 00:36:02.940 | 99.00th=[24511], 99.50th=[46924], 99.90th=[54264], 99.95th=[54264], 00:36:02.940 | 99.99th=[54264] 00:36:02.940 bw ( KiB/s): min=25856, max=30976, per=34.53%, avg=27577.20, stdev=1673.29, samples=10 00:36:02.940 iops : min= 202, max= 242, avg=215.40, stdev=13.00, samples=10 00:36:02.940 lat (msec) : 10=13.15%, 20=85.56%, 50=0.93%, 100=0.37% 00:36:02.940 cpu : usr=95.18%, sys=4.28%, ctx=38, majf=0, minf=144 00:36:02.940 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.940 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.940 filename0: (groupid=0, jobs=1): err= 0: pid=2421100: Thu Dec 5 14:07:33 2024 00:36:02.940 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(125MiB/5006msec) 00:36:02.940 slat (nsec): min=4633, max=59527, avg=16725.13, stdev=5802.36 00:36:02.940 clat (usec): min=5292, max=96518, avg=14963.20, stdev=7430.96 00:36:02.940 lat (usec): min=5300, max=96531, avg=14979.92, stdev=7430.19 00:36:02.940 clat percentiles (usec): 00:36:02.940 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[12256], 00:36:02.940 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13960], 60.00th=[14484], 00:36:02.940 | 70.00th=[15139], 80.00th=[16188], 90.00th=[17433], 95.00th=[18482], 00:36:02.940 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57410], 99.95th=[96994], 
00:36:02.940 | 99.99th=[96994] 00:36:02.940 bw ( KiB/s): min=16128, max=30464, per=32.02%, avg=25574.40, stdev=3892.64, samples=10 00:36:02.940 iops : min= 126, max= 238, avg=199.80, stdev=30.41, samples=10 00:36:02.940 lat (msec) : 10=11.68%, 20=85.43%, 100=2.89% 00:36:02.940 cpu : usr=94.33%, sys=5.15%, ctx=12, majf=0, minf=206 00:36:02.940 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.940 issued rwts: total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.940 00:36:02.940 Run status group 0 (all jobs): 00:36:02.940 READ: bw=78.0MiB/s (81.8MB/s), 25.0MiB/s-26.8MiB/s (26.2MB/s-28.1MB/s), io=394MiB (413MB), run=5006-5045msec 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.940 bdev_null0 00:36:02.940 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 
14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 [2024-12-05 14:07:33.869682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 bdev_null1 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 
14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:36:02.941 bdev_null2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem 
config 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.941 { 00:36:02.941 "params": { 00:36:02.941 "name": "Nvme$subsystem", 00:36:02.941 "trtype": "$TEST_TRANSPORT", 00:36:02.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.941 "adrfam": "ipv4", 00:36:02.941 "trsvcid": "$NVMF_PORT", 00:36:02.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.941 "hdgst": ${hdgst:-false}, 00:36:02.941 "ddgst": ${ddgst:-false} 00:36:02.941 }, 00:36:02.941 "method": "bdev_nvme_attach_controller" 00:36:02.941 } 00:36:02.941 EOF 00:36:02.941 )") 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@56 -- # cat 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:02.941 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.942 { 00:36:02.942 "params": { 00:36:02.942 "name": "Nvme$subsystem", 00:36:02.942 "trtype": "$TEST_TRANSPORT", 00:36:02.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.942 "adrfam": "ipv4", 00:36:02.942 "trsvcid": "$NVMF_PORT", 00:36:02.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.942 "hdgst": ${hdgst:-false}, 00:36:02.942 "ddgst": ${ddgst:-false} 00:36:02.942 }, 00:36:02.942 "method": "bdev_nvme_attach_controller" 00:36:02.942 } 00:36:02.942 EOF 00:36:02.942 )") 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.942 { 00:36:02.942 "params": { 00:36:02.942 "name": "Nvme$subsystem", 00:36:02.942 "trtype": "$TEST_TRANSPORT", 00:36:02.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.942 "adrfam": "ipv4", 00:36:02.942 "trsvcid": "$NVMF_PORT", 00:36:02.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.942 "hdgst": ${hdgst:-false}, 00:36:02.942 "ddgst": ${ddgst:-false} 00:36:02.942 }, 00:36:02.942 "method": "bdev_nvme_attach_controller" 00:36:02.942 } 00:36:02.942 EOF 00:36:02.942 )") 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:02.942 "params": { 00:36:02.942 "name": "Nvme0", 00:36:02.942 "trtype": "tcp", 00:36:02.942 "traddr": "10.0.0.2", 00:36:02.942 "adrfam": "ipv4", 00:36:02.942 "trsvcid": "4420", 00:36:02.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:02.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:02.942 "hdgst": false, 00:36:02.942 "ddgst": false 00:36:02.942 }, 00:36:02.942 "method": "bdev_nvme_attach_controller" 00:36:02.942 },{ 00:36:02.942 "params": { 00:36:02.942 "name": "Nvme1", 00:36:02.942 "trtype": "tcp", 00:36:02.942 "traddr": "10.0.0.2", 00:36:02.942 "adrfam": "ipv4", 00:36:02.942 "trsvcid": "4420", 00:36:02.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:02.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:02.942 "hdgst": false, 00:36:02.942 "ddgst": false 00:36:02.942 }, 00:36:02.942 "method": "bdev_nvme_attach_controller" 00:36:02.942 },{ 00:36:02.942 "params": { 00:36:02.942 "name": "Nvme2", 00:36:02.942 "trtype": "tcp", 00:36:02.942 "traddr": "10.0.0.2", 00:36:02.942 "adrfam": "ipv4", 00:36:02.942 "trsvcid": "4420", 00:36:02.942 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:02.942 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:02.942 "hdgst": false, 00:36:02.942 "ddgst": false 00:36:02.942 }, 00:36:02.942 "method": "bdev_nvme_attach_controller" 00:36:02.942 }' 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.942 14:07:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:02.942 14:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.942 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:02.942 ... 00:36:02.942 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:02.942 ... 00:36:02.942 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:02.942 ... 
00:36:02.942 fio-3.35 00:36:02.942 Starting 24 threads 00:36:15.149 00:36:15.149 filename0: (groupid=0, jobs=1): err= 0: pid=2421963: Thu Dec 5 14:07:45 2024 00:36:15.149 read: IOPS=56, BW=227KiB/s (232kB/s)(2296KiB/10119msec) 00:36:15.149 slat (nsec): min=8322, max=92529, avg=30704.16, stdev=14877.83 00:36:15.149 clat (msec): min=160, max=417, avg=281.57, stdev=45.04 00:36:15.149 lat (msec): min=160, max=417, avg=281.61, stdev=45.03 00:36:15.149 clat percentiles (msec): 00:36:15.149 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 247], 00:36:15.149 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 296], 00:36:15.149 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 334], 00:36:15.149 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:36:15.149 | 99.99th=[ 418] 00:36:15.149 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=223.20, stdev=70.12, samples=20 00:36:15.149 iops : min= 32, max= 96, avg=55.80, stdev=17.53, samples=20 00:36:15.149 lat (msec) : 250=25.09%, 500=74.91% 00:36:15.149 cpu : usr=98.10%, sys=1.35%, ctx=206, majf=0, minf=9 00:36:15.149 IO depths : 1=5.9%, 2=12.2%, 4=25.1%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:15.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.149 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.149 filename0: (groupid=0, jobs=1): err= 0: pid=2421964: Thu Dec 5 14:07:45 2024 00:36:15.149 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10141msec) 00:36:15.149 slat (usec): min=8, max=113, avg=49.13, stdev=29.21 00:36:15.149 clat (msec): min=49, max=386, avg=240.35, stdev=60.10 00:36:15.149 lat (msec): min=49, max=386, avg=240.39, stdev=60.11 00:36:15.149 clat percentiles (msec): 00:36:15.149 | 1.00th=[ 50], 5.00th=[ 123], 10.00th=[ 188], 20.00th=[ 203], 00:36:15.149 | 30.00th=[ 
211], 40.00th=[ 228], 50.00th=[ 236], 60.00th=[ 279], 00:36:15.149 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 309], 00:36:15.149 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 388], 99.95th=[ 388], 00:36:15.149 | 99.99th=[ 388] 00:36:15.149 bw ( KiB/s): min= 128, max= 496, per=4.17%, avg=262.40, stdev=70.87, samples=20 00:36:15.149 iops : min= 32, max= 124, avg=65.60, stdev=17.72, samples=20 00:36:15.149 lat (msec) : 50=2.38%, 100=2.38%, 250=51.79%, 500=43.45% 00:36:15.149 cpu : usr=98.01%, sys=1.45%, ctx=36, majf=0, minf=9 00:36:15.149 IO depths : 1=1.6%, 2=7.1%, 4=22.8%, 8=57.6%, 16=10.9%, 32=0.0%, >=64=0.0% 00:36:15.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.149 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.149 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.149 filename0: (groupid=0, jobs=1): err= 0: pid=2421965: Thu Dec 5 14:07:45 2024 00:36:15.149 read: IOPS=56, BW=227KiB/s (232kB/s)(2296KiB/10120msec) 00:36:15.149 slat (usec): min=11, max=112, avg=32.86, stdev=17.31 00:36:15.149 clat (msec): min=160, max=362, avg=281.63, stdev=43.45 00:36:15.149 lat (msec): min=160, max=362, avg=281.66, stdev=43.45 00:36:15.149 clat percentiles (msec): 00:36:15.149 | 1.00th=[ 161], 5.00th=[ 199], 10.00th=[ 207], 20.00th=[ 249], 00:36:15.149 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 296], 00:36:15.149 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 334], 00:36:15.149 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:36:15.149 | 99.99th=[ 363] 00:36:15.149 bw ( KiB/s): min= 128, max= 368, per=3.55%, avg=223.20, stdev=61.53, samples=20 00:36:15.149 iops : min= 32, max= 92, avg=55.80, stdev=15.38, samples=20 00:36:15.149 lat (msec) : 250=24.04%, 500=75.96% 00:36:15.149 cpu : usr=97.50%, sys=1.68%, ctx=113, majf=0, minf=9 00:36:15.149 IO depths : 
1=0.3%, 2=6.6%, 4=25.1%, 8=55.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:36:15.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.149 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.149 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.149 filename0: (groupid=0, jobs=1): err= 0: pid=2421966: Thu Dec 5 14:07:45 2024 00:36:15.149 read: IOPS=56, BW=227KiB/s (232kB/s)(2296KiB/10125msec) 00:36:15.149 slat (nsec): min=4073, max=94482, avg=64894.09, stdev=15249.75 00:36:15.149 clat (msec): min=148, max=418, avg=281.43, stdev=48.04 00:36:15.149 lat (msec): min=148, max=418, avg=281.50, stdev=48.04 00:36:15.149 clat percentiles (msec): 00:36:15.149 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 239], 00:36:15.149 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 292], 00:36:15.150 | 70.00th=[ 305], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 334], 00:36:15.150 | 99.00th=[ 414], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:36:15.150 | 99.99th=[ 418] 00:36:15.150 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=223.20, stdev=70.12, samples=20 00:36:15.150 iops : min= 32, max= 96, avg=55.80, stdev=17.53, samples=20 00:36:15.150 lat (msec) : 250=26.48%, 500=73.52% 00:36:15.150 cpu : usr=97.94%, sys=1.55%, ctx=14, majf=0, minf=9 00:36:15.150 IO depths : 1=5.1%, 2=11.3%, 4=25.1%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.150 filename0: (groupid=0, jobs=1): err= 0: pid=2421967: Thu Dec 5 14:07:45 2024 00:36:15.150 read: IOPS=66, BW=268KiB/s (274kB/s)(2712KiB/10137msec) 00:36:15.150 slat (nsec): 
min=6860, max=96199, avg=48620.14, stdev=26129.83 00:36:15.150 clat (msec): min=50, max=374, avg=237.75, stdev=62.37 00:36:15.150 lat (msec): min=50, max=374, avg=237.80, stdev=62.38 00:36:15.150 clat percentiles (msec): 00:36:15.150 | 1.00th=[ 51], 5.00th=[ 126], 10.00th=[ 180], 20.00th=[ 190], 00:36:15.150 | 30.00th=[ 209], 40.00th=[ 220], 50.00th=[ 236], 60.00th=[ 279], 00:36:15.150 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 317], 00:36:15.150 | 99.00th=[ 321], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:36:15.150 | 99.99th=[ 376] 00:36:15.150 bw ( KiB/s): min= 128, max= 512, per=4.20%, avg=264.80, stdev=95.64, samples=20 00:36:15.150 iops : min= 32, max= 128, avg=66.20, stdev=23.91, samples=20 00:36:15.150 lat (msec) : 100=4.72%, 250=50.88%, 500=44.40% 00:36:15.150 cpu : usr=98.06%, sys=1.38%, ctx=43, majf=0, minf=9 00:36:15.150 IO depths : 1=3.4%, 2=9.1%, 4=23.5%, 8=54.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.150 filename0: (groupid=0, jobs=1): err= 0: pid=2421968: Thu Dec 5 14:07:45 2024 00:36:15.150 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10114msec) 00:36:15.150 slat (nsec): min=8321, max=83994, avg=23940.25, stdev=15311.70 00:36:15.150 clat (msec): min=143, max=361, avg=280.70, stdev=45.13 00:36:15.150 lat (msec): min=143, max=361, avg=280.73, stdev=45.13 00:36:15.150 clat percentiles (msec): 00:36:15.150 | 1.00th=[ 144], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 247], 00:36:15.150 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 296], 00:36:15.150 | 70.00th=[ 309], 80.00th=[ 317], 90.00th=[ 330], 95.00th=[ 334], 00:36:15.150 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:36:15.150 | 
99.99th=[ 363] 00:36:15.150 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=224.00, stdev=70.42, samples=20 00:36:15.150 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:36:15.150 lat (msec) : 250=21.88%, 500=78.12% 00:36:15.150 cpu : usr=98.15%, sys=1.44%, ctx=17, majf=0, minf=9 00:36:15.150 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.150 filename0: (groupid=0, jobs=1): err= 0: pid=2421969: Thu Dec 5 14:07:45 2024 00:36:15.150 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10063msec) 00:36:15.150 slat (usec): min=4, max=106, avg=47.35, stdev=25.11 00:36:15.150 clat (msec): min=174, max=410, avg=271.54, stdev=47.83 00:36:15.150 lat (msec): min=174, max=410, avg=271.59, stdev=47.85 00:36:15.150 clat percentiles (msec): 00:36:15.150 | 1.00th=[ 186], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 224], 00:36:15.150 | 30.00th=[ 236], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 292], 00:36:15.150 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 334], 00:36:15.150 | 99.00th=[ 368], 99.50th=[ 393], 99.90th=[ 409], 99.95th=[ 409], 00:36:15.150 | 99.99th=[ 409] 00:36:15.150 bw ( KiB/s): min= 128, max= 384, per=3.66%, avg=230.40, stdev=65.54, samples=20 00:36:15.150 iops : min= 32, max= 96, avg=57.60, stdev=16.38, samples=20 00:36:15.150 lat (msec) : 250=32.60%, 500=67.40% 00:36:15.150 cpu : usr=98.08%, sys=1.32%, ctx=29, majf=0, minf=9 00:36:15.150 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued 
rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.150 filename0: (groupid=0, jobs=1): err= 0: pid=2421970: Thu Dec 5 14:07:45 2024 00:36:15.150 read: IOPS=66, BW=268KiB/s (274kB/s)(2712KiB/10123msec) 00:36:15.150 slat (nsec): min=4540, max=61077, avg=17790.67, stdev=10495.13 00:36:15.150 clat (msec): min=133, max=399, avg=238.02, stdev=39.09 00:36:15.150 lat (msec): min=133, max=399, avg=238.04, stdev=39.09 00:36:15.150 clat percentiles (msec): 00:36:15.150 | 1.00th=[ 159], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 205], 00:36:15.150 | 30.00th=[ 213], 40.00th=[ 218], 50.00th=[ 224], 60.00th=[ 236], 00:36:15.150 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 300], 00:36:15.150 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 401], 99.95th=[ 401], 00:36:15.150 | 99.99th=[ 401] 00:36:15.150 bw ( KiB/s): min= 144, max= 368, per=4.20%, avg=264.80, stdev=54.56, samples=20 00:36:15.150 iops : min= 36, max= 92, avg=66.20, stdev=13.64, samples=20 00:36:15.150 lat (msec) : 250=64.01%, 500=35.99% 00:36:15.150 cpu : usr=98.38%, sys=1.22%, ctx=20, majf=0, minf=9 00:36:15.150 IO depths : 1=0.6%, 2=4.3%, 4=17.3%, 8=65.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.150 filename1: (groupid=0, jobs=1): err= 0: pid=2421971: Thu Dec 5 14:07:45 2024 00:36:15.150 read: IOPS=57, BW=229KiB/s (235kB/s)(2304KiB/10054msec) 00:36:15.150 slat (nsec): min=8350, max=88726, avg=29285.39, stdev=18363.52 00:36:15.150 clat (msec): min=165, max=454, avg=279.00, stdev=51.13 00:36:15.150 lat (msec): min=165, max=454, avg=279.03, stdev=51.12 00:36:15.150 clat percentiles (msec): 00:36:15.150 | 1.00th=[ 165], 
5.00th=[ 178], 10.00th=[ 199], 20.00th=[ 236], 00:36:15.150 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 296], 00:36:15.150 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 338], 00:36:15.150 | 99.00th=[ 384], 99.50th=[ 451], 99.90th=[ 456], 99.95th=[ 456], 00:36:15.150 | 99.99th=[ 456] 00:36:15.150 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=224.00, stdev=66.28, samples=20 00:36:15.150 iops : min= 32, max= 96, avg=56.00, stdev=16.57, samples=20 00:36:15.150 lat (msec) : 250=29.51%, 500=70.49% 00:36:15.150 cpu : usr=98.40%, sys=1.15%, ctx=39, majf=0, minf=10 00:36:15.150 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.150 filename1: (groupid=0, jobs=1): err= 0: pid=2421972: Thu Dec 5 14:07:45 2024 00:36:15.150 read: IOPS=71, BW=286KiB/s (293kB/s)(2896KiB/10138msec) 00:36:15.150 slat (usec): min=8, max=113, avg=34.84, stdev=29.51 00:36:15.150 clat (msec): min=50, max=410, avg=222.68, stdev=58.42 00:36:15.150 lat (msec): min=50, max=410, avg=222.71, stdev=58.44 00:36:15.150 clat percentiles (msec): 00:36:15.150 | 1.00th=[ 51], 5.00th=[ 148], 10.00th=[ 159], 20.00th=[ 188], 00:36:15.150 | 30.00th=[ 201], 40.00th=[ 209], 50.00th=[ 213], 60.00th=[ 222], 00:36:15.150 | 70.00th=[ 264], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 300], 00:36:15.150 | 99.00th=[ 372], 99.50th=[ 388], 99.90th=[ 409], 99.95th=[ 409], 00:36:15.150 | 99.99th=[ 409] 00:36:15.150 bw ( KiB/s): min= 128, max= 496, per=4.50%, avg=283.20, stdev=83.56, samples=20 00:36:15.150 iops : min= 32, max= 124, avg=70.80, stdev=20.89, samples=20 00:36:15.150 lat (msec) : 100=4.14%, 250=64.09%, 500=31.77% 00:36:15.150 cpu : usr=98.12%, 
sys=1.33%, ctx=60, majf=0, minf=10 00:36:15.150 IO depths : 1=1.4%, 2=5.1%, 4=17.1%, 8=65.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued rwts: total=724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.150 filename1: (groupid=0, jobs=1): err= 0: pid=2421973: Thu Dec 5 14:07:45 2024 00:36:15.150 read: IOPS=73, BW=294KiB/s (302kB/s)(2984KiB/10134msec) 00:36:15.150 slat (nsec): min=8055, max=75200, avg=17358.24, stdev=13831.89 00:36:15.150 clat (msec): min=133, max=357, avg=216.40, stdev=43.99 00:36:15.150 lat (msec): min=133, max=357, avg=216.42, stdev=43.99 00:36:15.150 clat percentiles (msec): 00:36:15.150 | 1.00th=[ 142], 5.00th=[ 153], 10.00th=[ 167], 20.00th=[ 182], 00:36:15.150 | 30.00th=[ 192], 40.00th=[ 201], 50.00th=[ 207], 60.00th=[ 224], 00:36:15.150 | 70.00th=[ 232], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 313], 00:36:15.150 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 359], 99.95th=[ 359], 00:36:15.150 | 99.99th=[ 359] 00:36:15.150 bw ( KiB/s): min= 144, max= 384, per=4.65%, avg=292.00, stdev=56.84, samples=20 00:36:15.150 iops : min= 36, max= 96, avg=73.00, stdev=14.21, samples=20 00:36:15.150 lat (msec) : 250=82.04%, 500=17.96% 00:36:15.150 cpu : usr=98.26%, sys=1.28%, ctx=29, majf=0, minf=9 00:36:15.150 IO depths : 1=0.9%, 2=3.1%, 4=11.9%, 8=72.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:36:15.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.150 issued rwts: total=746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename1: (groupid=0, jobs=1): err= 0: pid=2421974: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=57, 
BW=229KiB/s (235kB/s)(2304KiB/10055msec) 00:36:15.151 slat (usec): min=18, max=116, avg=69.89, stdev=12.56 00:36:15.151 clat (msec): min=189, max=337, avg=278.68, stdev=40.23 00:36:15.151 lat (msec): min=189, max=337, avg=278.75, stdev=40.24 00:36:15.151 clat percentiles (msec): 00:36:15.151 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 215], 20.00th=[ 236], 00:36:15.151 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 300], 00:36:15.151 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 334], 00:36:15.151 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:36:15.151 | 99.99th=[ 338] 00:36:15.151 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=224.00, stdev=70.42, samples=20 00:36:15.151 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:36:15.151 lat (msec) : 250=25.00%, 500=75.00% 00:36:15.151 cpu : usr=97.39%, sys=1.67%, ctx=140, majf=0, minf=9 00:36:15.151 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename1: (groupid=0, jobs=1): err= 0: pid=2421975: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10119msec) 00:36:15.151 slat (nsec): min=18371, max=87740, avg=29698.26, stdev=8183.02 00:36:15.151 clat (msec): min=160, max=419, avg=280.79, stdev=46.91 00:36:15.151 lat (msec): min=160, max=419, avg=280.82, stdev=46.91 00:36:15.151 clat percentiles (msec): 00:36:15.151 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 239], 00:36:15.151 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 296], 00:36:15.151 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 334], 00:36:15.151 | 99.00th=[ 388], 99.50th=[ 401], 
99.90th=[ 418], 99.95th=[ 418], 00:36:15.151 | 99.99th=[ 418] 00:36:15.151 bw ( KiB/s): min= 128, max= 368, per=3.55%, avg=224.00, stdev=67.88, samples=20 00:36:15.151 iops : min= 32, max= 92, avg=56.00, stdev=16.97, samples=20 00:36:15.151 lat (msec) : 250=26.04%, 500=73.96% 00:36:15.151 cpu : usr=98.16%, sys=1.35%, ctx=14, majf=0, minf=9 00:36:15.151 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename1: (groupid=0, jobs=1): err= 0: pid=2421976: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10121msec) 00:36:15.151 slat (usec): min=14, max=107, avg=45.60, stdev=23.19 00:36:15.151 clat (msec): min=132, max=332, avg=273.13, stdev=43.60 00:36:15.151 lat (msec): min=132, max=332, avg=273.18, stdev=43.62 00:36:15.151 clat percentiles (msec): 00:36:15.151 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 203], 20.00th=[ 224], 00:36:15.151 | 30.00th=[ 262], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 292], 00:36:15.151 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 321], 95.00th=[ 330], 00:36:15.151 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:36:15.151 | 99.99th=[ 334] 00:36:15.151 bw ( KiB/s): min= 128, max= 384, per=3.66%, avg=230.40, stdev=66.96, samples=20 00:36:15.151 iops : min= 32, max= 96, avg=57.60, stdev=16.74, samples=20 00:36:15.151 lat (msec) : 250=27.70%, 500=72.30% 00:36:15.151 cpu : usr=98.30%, sys=1.25%, ctx=28, majf=0, minf=9 00:36:15.151 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename1: (groupid=0, jobs=1): err= 0: pid=2421977: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=73, BW=296KiB/s (303kB/s)(3000KiB/10136msec) 00:36:15.151 slat (usec): min=8, max=102, avg=53.15, stdev=26.15 00:36:15.151 clat (msec): min=49, max=372, avg=215.16, stdev=49.30 00:36:15.151 lat (msec): min=49, max=372, avg=215.22, stdev=49.31 00:36:15.151 clat percentiles (msec): 00:36:15.151 | 1.00th=[ 50], 5.00th=[ 148], 10.00th=[ 176], 20.00th=[ 192], 00:36:15.151 | 30.00th=[ 197], 40.00th=[ 205], 50.00th=[ 209], 60.00th=[ 215], 00:36:15.151 | 70.00th=[ 230], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 288], 00:36:15.151 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 372], 99.95th=[ 372], 00:36:15.151 | 99.99th=[ 372] 00:36:15.151 bw ( KiB/s): min= 144, max= 513, per=4.66%, avg=293.65, stdev=71.01, samples=20 00:36:15.151 iops : min= 36, max= 128, avg=73.40, stdev=17.71, samples=20 00:36:15.151 lat (msec) : 50=2.13%, 100=2.13%, 250=75.47%, 500=20.27% 00:36:15.151 cpu : usr=97.91%, sys=1.51%, ctx=39, majf=0, minf=9 00:36:15.151 IO depths : 1=1.7%, 2=4.3%, 4=13.6%, 8=69.5%, 16=10.9%, 32=0.0%, >=64=0.0% 00:36:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 complete : 0=0.0%, 4=90.8%, 8=3.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 issued rwts: total=750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename1: (groupid=0, jobs=1): err= 0: pid=2421978: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=81, BW=325KiB/s (333kB/s)(3296KiB/10137msec) 00:36:15.151 slat (nsec): min=6739, max=84281, avg=16388.35, stdev=16222.05 00:36:15.151 clat (msec): min=40, max=329, avg=195.45, stdev=46.64 00:36:15.151 lat (msec): min=40, max=329, avg=195.47, stdev=46.64 00:36:15.151 
clat percentiles (msec): 00:36:15.151 | 1.00th=[ 51], 5.00th=[ 125], 10.00th=[ 148], 20.00th=[ 176], 00:36:15.151 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 197], 60.00th=[ 203], 00:36:15.151 | 70.00th=[ 211], 80.00th=[ 220], 90.00th=[ 243], 95.00th=[ 279], 00:36:15.151 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:36:15.151 | 99.99th=[ 330] 00:36:15.151 bw ( KiB/s): min= 224, max= 512, per=5.14%, avg=323.20, stdev=62.21, samples=20 00:36:15.151 iops : min= 56, max= 128, avg=80.80, stdev=15.55, samples=20 00:36:15.151 lat (msec) : 50=1.82%, 100=2.06%, 250=88.35%, 500=7.77% 00:36:15.151 cpu : usr=98.12%, sys=1.36%, ctx=47, majf=0, minf=9 00:36:15.151 IO depths : 1=0.4%, 2=2.1%, 4=10.7%, 8=74.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 complete : 0=0.0%, 4=90.0%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 issued rwts: total=824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename2: (groupid=0, jobs=1): err= 0: pid=2421979: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10120msec) 00:36:15.151 slat (nsec): min=8918, max=54681, avg=26719.12, stdev=5634.74 00:36:15.151 clat (msec): min=147, max=420, avg=280.87, stdev=49.74 00:36:15.151 lat (msec): min=147, max=420, avg=280.90, stdev=49.73 00:36:15.151 clat percentiles (msec): 00:36:15.151 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 199], 20.00th=[ 239], 00:36:15.151 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 296], 00:36:15.151 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 363], 00:36:15.151 | 99.00th=[ 405], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:36:15.151 | 99.99th=[ 422] 00:36:15.151 bw ( KiB/s): min= 128, max= 368, per=3.55%, avg=224.00, stdev=66.28, samples=20 00:36:15.151 iops : min= 32, max= 92, avg=56.00, stdev=16.57, samples=20 
00:36:15.151 lat (msec) : 250=26.74%, 500=73.26% 00:36:15.151 cpu : usr=97.39%, sys=1.78%, ctx=50, majf=0, minf=9 00:36:15.151 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename2: (groupid=0, jobs=1): err= 0: pid=2421980: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=71, BW=285KiB/s (292kB/s)(2880KiB/10117msec) 00:36:15.151 slat (nsec): min=8223, max=83228, avg=17815.74, stdev=10117.54 00:36:15.151 clat (msec): min=126, max=373, avg=223.94, stdev=40.31 00:36:15.151 lat (msec): min=126, max=373, avg=223.95, stdev=40.31 00:36:15.151 clat percentiles (msec): 00:36:15.151 | 1.00th=[ 136], 5.00th=[ 153], 10.00th=[ 186], 20.00th=[ 194], 00:36:15.151 | 30.00th=[ 203], 40.00th=[ 211], 50.00th=[ 215], 60.00th=[ 222], 00:36:15.151 | 70.00th=[ 232], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 292], 00:36:15.151 | 99.00th=[ 296], 99.50th=[ 355], 99.90th=[ 376], 99.95th=[ 376], 00:36:15.151 | 99.99th=[ 376] 00:36:15.151 bw ( KiB/s): min= 128, max= 384, per=4.47%, avg=281.60, stdev=57.43, samples=20 00:36:15.151 iops : min= 32, max= 96, avg=70.40, stdev=14.36, samples=20 00:36:15.151 lat (msec) : 250=76.11%, 500=23.89% 00:36:15.151 cpu : usr=98.23%, sys=1.35%, ctx=18, majf=0, minf=9 00:36:15.151 IO depths : 1=1.8%, 2=6.0%, 4=18.8%, 8=62.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:36:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.151 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.151 filename2: (groupid=0, jobs=1): 
err= 0: pid=2421981: Thu Dec 5 14:07:45 2024 00:36:15.151 read: IOPS=80, BW=320KiB/s (328kB/s)(3248KiB/10137msec) 00:36:15.151 slat (nsec): min=8122, max=95417, avg=21309.46, stdev=20880.50 00:36:15.151 clat (msec): min=50, max=307, avg=198.85, stdev=36.44 00:36:15.151 lat (msec): min=50, max=307, avg=198.87, stdev=36.44 00:36:15.151 clat percentiles (msec): 00:36:15.151 | 1.00th=[ 51], 5.00th=[ 153], 10.00th=[ 176], 20.00th=[ 186], 00:36:15.151 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 207], 00:36:15.151 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 232], 95.00th=[ 251], 00:36:15.151 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 309], 99.95th=[ 309], 00:36:15.151 | 99.99th=[ 309] 00:36:15.152 bw ( KiB/s): min= 256, max= 512, per=5.06%, avg=318.40, stdev=60.96, samples=20 00:36:15.152 iops : min= 64, max= 128, avg=79.60, stdev=15.24, samples=20 00:36:15.152 lat (msec) : 100=3.94%, 250=91.75%, 500=4.31% 00:36:15.152 cpu : usr=98.38%, sys=1.14%, ctx=23, majf=0, minf=9 00:36:15.152 IO depths : 1=1.4%, 2=4.1%, 4=14.3%, 8=69.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:36:15.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 complete : 0=0.0%, 4=91.0%, 8=3.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 issued rwts: total=812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.152 filename2: (groupid=0, jobs=1): err= 0: pid=2421982: Thu Dec 5 14:07:45 2024 00:36:15.152 read: IOPS=73, BW=296KiB/s (303kB/s)(3000KiB/10138msec) 00:36:15.152 slat (nsec): min=8153, max=98449, avg=33120.09, stdev=26892.14 00:36:15.152 clat (msec): min=50, max=366, avg=215.33, stdev=56.63 00:36:15.152 lat (msec): min=50, max=366, avg=215.36, stdev=56.64 00:36:15.152 clat percentiles (msec): 00:36:15.152 | 1.00th=[ 51], 5.00th=[ 144], 10.00th=[ 159], 20.00th=[ 178], 00:36:15.152 | 30.00th=[ 190], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 213], 00:36:15.152 | 70.00th=[ 230], 80.00th=[ 
279], 90.00th=[ 292], 95.00th=[ 296], 00:36:15.152 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 368], 99.95th=[ 368], 00:36:15.152 | 99.99th=[ 368] 00:36:15.152 bw ( KiB/s): min= 128, max= 512, per=4.66%, avg=293.60, stdev=78.09, samples=20 00:36:15.152 iops : min= 32, max= 128, avg=73.40, stdev=19.52, samples=20 00:36:15.152 lat (msec) : 100=4.27%, 250=67.47%, 500=28.27% 00:36:15.152 cpu : usr=98.16%, sys=1.41%, ctx=16, majf=0, minf=9 00:36:15.152 IO depths : 1=2.4%, 2=6.1%, 4=17.2%, 8=64.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:36:15.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 issued rwts: total=750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.152 filename2: (groupid=0, jobs=1): err= 0: pid=2421983: Thu Dec 5 14:07:45 2024 00:36:15.152 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10119msec) 00:36:15.152 slat (usec): min=8, max=121, avg=30.90, stdev=12.85 00:36:15.152 clat (msec): min=139, max=415, avg=280.81, stdev=50.89 00:36:15.152 lat (msec): min=139, max=415, avg=280.84, stdev=50.89 00:36:15.152 clat percentiles (msec): 00:36:15.152 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 199], 20.00th=[ 239], 00:36:15.152 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 296], 00:36:15.152 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 363], 00:36:15.152 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:36:15.152 | 99.99th=[ 418] 00:36:15.152 bw ( KiB/s): min= 128, max= 368, per=3.55%, avg=224.00, stdev=69.26, samples=20 00:36:15.152 iops : min= 32, max= 92, avg=56.00, stdev=17.31, samples=20 00:36:15.152 lat (msec) : 250=27.43%, 500=72.57% 00:36:15.152 cpu : usr=97.61%, sys=1.62%, ctx=108, majf=0, minf=9 00:36:15.152 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:15.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.152 filename2: (groupid=0, jobs=1): err= 0: pid=2421984: Thu Dec 5 14:07:45 2024 00:36:15.152 read: IOPS=84, BW=336KiB/s (344kB/s)(3408KiB/10137msec) 00:36:15.152 slat (nsec): min=7459, max=92541, avg=47441.00, stdev=24917.11 00:36:15.152 clat (msec): min=50, max=328, avg=188.87, stdev=48.55 00:36:15.152 lat (msec): min=50, max=328, avg=188.91, stdev=48.54 00:36:15.152 clat percentiles (msec): 00:36:15.152 | 1.00th=[ 51], 5.00th=[ 121], 10.00th=[ 136], 20.00th=[ 157], 00:36:15.152 | 30.00th=[ 169], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:36:15.152 | 70.00th=[ 205], 80.00th=[ 220], 90.00th=[ 249], 95.00th=[ 279], 00:36:15.152 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:36:15.152 | 99.99th=[ 330] 00:36:15.152 bw ( KiB/s): min= 224, max= 512, per=5.31%, avg=334.40, stdev=69.24, samples=20 00:36:15.152 iops : min= 56, max= 128, avg=83.60, stdev=17.31, samples=20 00:36:15.152 lat (msec) : 100=4.23%, 250=87.09%, 500=8.69% 00:36:15.152 cpu : usr=98.15%, sys=1.39%, ctx=8, majf=0, minf=9 00:36:15.152 IO depths : 1=0.2%, 2=0.7%, 4=7.0%, 8=79.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:36:15.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 complete : 0=0.0%, 4=88.9%, 8=6.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 issued rwts: total=852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.152 filename2: (groupid=0, jobs=1): err= 0: pid=2421985: Thu Dec 5 14:07:45 2024 00:36:15.152 read: IOPS=57, BW=229KiB/s (235kB/s)(2304KiB/10054msec) 00:36:15.152 slat (nsec): min=5764, max=93804, avg=27856.46, stdev=17599.40 00:36:15.152 clat (msec): min=133, max=454, avg=279.07, 
stdev=54.02 00:36:15.152 lat (msec): min=133, max=454, avg=279.09, stdev=54.02 00:36:15.152 clat percentiles (msec): 00:36:15.152 | 1.00th=[ 159], 5.00th=[ 190], 10.00th=[ 197], 20.00th=[ 224], 00:36:15.152 | 30.00th=[ 264], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 300], 00:36:15.152 | 70.00th=[ 300], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 338], 00:36:15.152 | 99.00th=[ 451], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:36:15.152 | 99.99th=[ 456] 00:36:15.152 bw ( KiB/s): min= 128, max= 368, per=3.55%, avg=224.00, stdev=64.63, samples=20 00:36:15.152 iops : min= 32, max= 92, avg=56.00, stdev=16.16, samples=20 00:36:15.152 lat (msec) : 250=27.95%, 500=72.05% 00:36:15.152 cpu : usr=98.28%, sys=1.31%, ctx=23, majf=0, minf=9 00:36:15.152 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:15.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.152 filename2: (groupid=0, jobs=1): err= 0: pid=2421986: Thu Dec 5 14:07:45 2024 00:36:15.152 read: IOPS=77, BW=312KiB/s (319kB/s)(3160KiB/10132msec) 00:36:15.152 slat (nsec): min=4427, max=48591, avg=13289.68, stdev=7547.31 00:36:15.152 clat (msec): min=113, max=326, avg=204.32, stdev=37.23 00:36:15.152 lat (msec): min=113, max=326, avg=204.34, stdev=37.23 00:36:15.152 clat percentiles (msec): 00:36:15.152 | 1.00th=[ 114], 5.00th=[ 144], 10.00th=[ 163], 20.00th=[ 182], 00:36:15.152 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 203], 60.00th=[ 207], 00:36:15.152 | 70.00th=[ 215], 80.00th=[ 228], 90.00th=[ 243], 95.00th=[ 279], 00:36:15.152 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:36:15.152 | 99.99th=[ 326] 00:36:15.152 bw ( KiB/s): min= 224, max= 384, per=4.92%, avg=309.60, stdev=39.29, samples=20 00:36:15.152 
iops : min= 56, max= 96, avg=77.40, stdev= 9.82, samples=20 00:36:15.152 lat (msec) : 250=91.65%, 500=8.35% 00:36:15.152 cpu : usr=98.26%, sys=1.37%, ctx=14, majf=0, minf=9 00:36:15.152 IO depths : 1=0.6%, 2=2.0%, 4=10.0%, 8=75.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:36:15.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 complete : 0=0.0%, 4=89.7%, 8=5.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.152 issued rwts: total=790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.152 00:36:15.152 Run status group 0 (all jobs): 00:36:15.152 READ: bw=6285KiB/s (6436kB/s), 227KiB/s-336KiB/s (232kB/s-344kB/s), io=62.2MiB (65.3MB), run=10054-10141msec 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.152 14:07:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.152 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 bdev_null0 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:15.153 14:07:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 [2024-12-05 14:07:45.770326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 bdev_null1 00:36:15.153 14:07:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.153 14:07:45 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.153 { 00:36:15.153 "params": { 00:36:15.153 "name": "Nvme$subsystem", 00:36:15.153 "trtype": "$TEST_TRANSPORT", 00:36:15.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.153 "adrfam": "ipv4", 00:36:15.153 "trsvcid": "$NVMF_PORT", 00:36:15.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.153 "hdgst": ${hdgst:-false}, 00:36:15.153 "ddgst": ${ddgst:-false} 00:36:15.153 }, 00:36:15.153 "method": "bdev_nvme_attach_controller" 00:36:15.153 } 00:36:15.153 EOF 00:36:15.153 )") 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 
00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.153 { 00:36:15.153 "params": { 00:36:15.153 "name": "Nvme$subsystem", 00:36:15.153 "trtype": "$TEST_TRANSPORT", 00:36:15.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.153 "adrfam": "ipv4", 00:36:15.153 "trsvcid": "$NVMF_PORT", 00:36:15.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.153 "hdgst": ${hdgst:-false}, 00:36:15.153 "ddgst": ${ddgst:-false} 00:36:15.153 }, 00:36:15.153 "method": "bdev_nvme_attach_controller" 00:36:15.153 } 00:36:15.153 EOF 00:36:15.153 )") 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.153 
14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.153 "params": { 00:36:15.153 "name": "Nvme0", 00:36:15.153 "trtype": "tcp", 00:36:15.153 "traddr": "10.0.0.2", 00:36:15.153 "adrfam": "ipv4", 00:36:15.153 "trsvcid": "4420", 00:36:15.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.153 "hdgst": false, 00:36:15.153 "ddgst": false 00:36:15.153 }, 00:36:15.153 "method": "bdev_nvme_attach_controller" 00:36:15.153 },{ 00:36:15.153 "params": { 00:36:15.153 "name": "Nvme1", 00:36:15.153 "trtype": "tcp", 00:36:15.153 "traddr": "10.0.0.2", 00:36:15.153 "adrfam": "ipv4", 00:36:15.153 "trsvcid": "4420", 00:36:15.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:15.153 "hdgst": false, 00:36:15.153 "ddgst": false 00:36:15.153 }, 00:36:15.153 "method": "bdev_nvme_attach_controller" 00:36:15.153 }' 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:15.153 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:15.154 14:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.154 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:15.154 ... 00:36:15.154 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:15.154 ... 00:36:15.154 fio-3.35 00:36:15.154 Starting 4 threads 00:36:20.413 00:36:20.413 filename0: (groupid=0, jobs=1): err= 0: pid=2423484: Thu Dec 5 14:07:51 2024 00:36:20.413 read: IOPS=1961, BW=15.3MiB/s (16.1MB/s)(76.6MiB/5003msec) 00:36:20.413 slat (nsec): min=7190, max=60076, avg=12499.13, stdev=6227.00 00:36:20.413 clat (usec): min=724, max=7711, avg=4034.63, stdev=592.23 00:36:20.413 lat (usec): min=737, max=7729, avg=4047.13, stdev=592.50 00:36:20.413 clat percentiles (usec): 00:36:20.413 | 1.00th=[ 2245], 5.00th=[ 3032], 10.00th=[ 3359], 20.00th=[ 3621], 00:36:20.413 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4228], 00:36:20.413 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4817], 00:36:20.413 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7439], 99.95th=[ 7504], 00:36:20.413 | 99.99th=[ 7701] 00:36:20.413 bw ( KiB/s): min=14544, max=16448, per=26.68%, avg=15686.40, stdev=559.58, samples=10 00:36:20.413 iops : min= 1818, max= 2056, avg=1960.80, stdev=69.95, samples=10 00:36:20.413 lat (usec) : 750=0.01%, 1000=0.01% 00:36:20.413 lat (msec) : 2=0.52%, 4=43.32%, 10=56.14% 00:36:20.413 cpu : usr=94.24%, sys=5.22%, ctx=10, majf=0, minf=55 00:36:20.413 IO depths : 1=0.7%, 2=14.9%, 4=57.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.413 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 issued rwts: total=9811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.413 filename0: (groupid=0, jobs=1): err= 0: pid=2423485: Thu Dec 5 14:07:51 2024 00:36:20.413 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5002msec) 00:36:20.413 slat (nsec): min=6872, max=90832, avg=17093.13, stdev=8798.18 00:36:20.413 clat (usec): min=854, max=7882, avg=4283.96, stdev=697.71 00:36:20.413 lat (usec): min=871, max=7903, avg=4301.06, stdev=697.68 00:36:20.413 clat percentiles (usec): 00:36:20.413 | 1.00th=[ 2540], 5.00th=[ 3359], 10.00th=[ 3589], 20.00th=[ 3851], 00:36:20.413 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:20.413 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5080], 95.00th=[ 5604], 00:36:20.413 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 7767], 00:36:20.413 | 99.99th=[ 7898] 00:36:20.413 bw ( KiB/s): min=14412, max=15008, per=25.05%, avg=14732.40, stdev=189.43, samples=10 00:36:20.413 iops : min= 1801, max= 1876, avg=1841.50, stdev=23.77, samples=10 00:36:20.413 lat (usec) : 1000=0.03% 00:36:20.413 lat (msec) : 2=0.45%, 4=26.69%, 10=72.84% 00:36:20.413 cpu : usr=92.52%, sys=5.62%, ctx=129, majf=0, minf=53 00:36:20.413 IO depths : 1=0.2%, 2=14.1%, 4=58.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 issued rwts: total=9211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.413 filename1: (groupid=0, jobs=1): err= 0: pid=2423486: Thu Dec 5 14:07:51 2024 00:36:20.413 read: IOPS=1748, BW=13.7MiB/s (14.3MB/s)(68.3MiB/5001msec) 00:36:20.413 slat (nsec): min=7347, 
max=77077, avg=15143.31, stdev=8180.32 00:36:20.413 clat (usec): min=732, max=8097, avg=4524.41, stdev=785.37 00:36:20.413 lat (usec): min=745, max=8105, avg=4539.55, stdev=784.63 00:36:20.413 clat percentiles (usec): 00:36:20.413 | 1.00th=[ 2802], 5.00th=[ 3589], 10.00th=[ 3851], 20.00th=[ 4113], 00:36:20.413 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:36:20.413 | 70.00th=[ 4555], 80.00th=[ 4883], 90.00th=[ 5473], 95.00th=[ 6128], 00:36:20.413 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 7963], 99.95th=[ 8094], 00:36:20.413 | 99.99th=[ 8094] 00:36:20.413 bw ( KiB/s): min=13120, max=14384, per=23.76%, avg=13971.56, stdev=397.49, samples=9 00:36:20.413 iops : min= 1640, max= 1798, avg=1746.44, stdev=49.69, samples=9 00:36:20.413 lat (usec) : 750=0.01%, 1000=0.03% 00:36:20.413 lat (msec) : 2=0.29%, 4=14.89%, 10=84.77% 00:36:20.413 cpu : usr=95.00%, sys=4.50%, ctx=6, majf=0, minf=35 00:36:20.413 IO depths : 1=0.1%, 2=10.6%, 4=61.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 issued rwts: total=8742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.413 filename1: (groupid=0, jobs=1): err= 0: pid=2423487: Thu Dec 5 14:07:51 2024 00:36:20.413 read: IOPS=1801, BW=14.1MiB/s (14.8MB/s)(70.4MiB/5004msec) 00:36:20.413 slat (nsec): min=5783, max=75971, avg=14203.49, stdev=7411.85 00:36:20.413 clat (usec): min=1112, max=7954, avg=4391.51, stdev=698.89 00:36:20.413 lat (usec): min=1125, max=7969, avg=4405.71, stdev=698.52 00:36:20.413 clat percentiles (usec): 00:36:20.413 | 1.00th=[ 2671], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 3982], 00:36:20.413 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:20.413 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5276], 95.00th=[ 5669], 
00:36:20.413 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7767], 99.95th=[ 7898], 00:36:20.413 | 99.99th=[ 7963] 00:36:20.413 bw ( KiB/s): min=14016, max=15088, per=24.52%, avg=14417.60, stdev=347.15, samples=10 00:36:20.413 iops : min= 1752, max= 1886, avg=1802.20, stdev=43.39, samples=10 00:36:20.413 lat (msec) : 2=0.50%, 4=20.63%, 10=78.87% 00:36:20.413 cpu : usr=94.86%, sys=4.62%, ctx=6, majf=0, minf=18 00:36:20.413 IO depths : 1=0.3%, 2=11.0%, 4=60.8%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.413 issued rwts: total=9016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.413 00:36:20.413 Run status group 0 (all jobs): 00:36:20.413 READ: bw=57.4MiB/s (60.2MB/s), 13.7MiB/s-15.3MiB/s (14.3MB/s-16.1MB/s), io=287MiB (301MB), run=5001-5004msec 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.672 00:36:20.672 real 0m24.707s 00:36:20.672 user 4m36.234s 00:36:20.672 sys 0m6.059s 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.672 14:07:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.672 ************************************ 00:36:20.672 END TEST fio_dif_rand_params 00:36:20.672 ************************************ 00:36:20.931 14:07:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:20.931 14:07:52 
nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:20.931 14:07:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:20.931 14:07:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:20.931 ************************************ 00:36:20.931 START TEST fio_dif_digest 00:36:20.931 ************************************ 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:20.931 bdev_null0 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:20.931 [2024-12-05 14:07:52.264226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:20.931 14:07:52 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:20.931 { 00:36:20.931 "params": { 00:36:20.931 "name": "Nvme$subsystem", 00:36:20.931 "trtype": "$TEST_TRANSPORT", 00:36:20.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.931 "adrfam": "ipv4", 00:36:20.931 "trsvcid": "$NVMF_PORT", 00:36:20.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.931 "hdgst": ${hdgst:-false}, 00:36:20.931 "ddgst": ${ddgst:-false} 00:36:20.931 }, 00:36:20.931 "method": "bdev_nvme_attach_controller" 00:36:20.931 } 00:36:20.931 EOF 00:36:20.931 )") 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:20.931 "params": { 00:36:20.931 "name": "Nvme0", 00:36:20.931 "trtype": "tcp", 00:36:20.931 "traddr": "10.0.0.2", 00:36:20.931 "adrfam": "ipv4", 00:36:20.931 "trsvcid": "4420", 00:36:20.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.931 "hdgst": true, 00:36:20.931 "ddgst": true 00:36:20.931 }, 00:36:20.931 "method": "bdev_nvme_attach_controller" 00:36:20.931 }' 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:20.931 14:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.190 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:21.190 ... 
00:36:21.190 fio-3.35 00:36:21.190 Starting 3 threads 00:36:33.431 00:36:33.431 filename0: (groupid=0, jobs=1): err= 0: pid=2424248: Thu Dec 5 14:08:03 2024 00:36:33.431 read: IOPS=195, BW=24.5MiB/s (25.6MB/s)(246MiB/10048msec) 00:36:33.431 slat (nsec): min=5705, max=48851, avg=18933.47, stdev=5147.48 00:36:33.431 clat (usec): min=11883, max=52045, avg=15289.25, stdev=1498.35 00:36:33.431 lat (usec): min=11902, max=52064, avg=15308.19, stdev=1498.35 00:36:33.431 clat percentiles (usec): 00:36:33.431 | 1.00th=[13173], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:36:33.431 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:36:33.431 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:36:33.431 | 99.00th=[17695], 99.50th=[18482], 99.90th=[49546], 99.95th=[52167], 00:36:33.431 | 99.99th=[52167] 00:36:33.431 bw ( KiB/s): min=24576, max=25856, per=32.89%, avg=25128.85, stdev=388.52, samples=20 00:36:33.431 iops : min= 192, max= 202, avg=196.30, stdev= 3.06, samples=20 00:36:33.431 lat (msec) : 20=99.80%, 50=0.15%, 100=0.05% 00:36:33.431 cpu : usr=95.04%, sys=4.43%, ctx=22, majf=0, minf=110 00:36:33.431 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.431 issued rwts: total=1966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.431 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.431 filename0: (groupid=0, jobs=1): err= 0: pid=2424249: Thu Dec 5 14:08:03 2024 00:36:33.431 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(268MiB/10048msec) 00:36:33.431 slat (nsec): min=5835, max=53628, avg=16022.86, stdev=4944.82 00:36:33.431 clat (usec): min=10182, max=52165, avg=14007.29, stdev=1442.59 00:36:33.431 lat (usec): min=10195, max=52179, avg=14023.31, stdev=1442.53 00:36:33.431 clat percentiles (usec): 00:36:33.431 | 
1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:36:33.431 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:36:33.431 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15401], 00:36:33.431 | 99.00th=[16057], 99.50th=[16319], 99.90th=[20841], 99.95th=[47973], 00:36:33.431 | 99.99th=[52167] 00:36:33.431 bw ( KiB/s): min=26368, max=28160, per=35.90%, avg=27430.40, stdev=471.86, samples=20 00:36:33.432 iops : min= 206, max= 220, avg=214.30, stdev= 3.69, samples=20 00:36:33.432 lat (msec) : 20=99.86%, 50=0.09%, 100=0.05% 00:36:33.432 cpu : usr=93.70%, sys=5.74%, ctx=23, majf=0, minf=139 00:36:33.432 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.432 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.432 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.432 filename0: (groupid=0, jobs=1): err= 0: pid=2424250: Thu Dec 5 14:08:03 2024 00:36:33.432 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(236MiB/10047msec) 00:36:33.432 slat (nsec): min=5844, max=84753, avg=16476.95, stdev=4847.58 00:36:33.432 clat (usec): min=12533, max=50228, avg=15937.37, stdev=1494.00 00:36:33.432 lat (usec): min=12546, max=50247, avg=15953.85, stdev=1494.14 00:36:33.432 clat percentiles (usec): 00:36:33.432 | 1.00th=[13435], 5.00th=[14222], 10.00th=[14615], 20.00th=[15139], 00:36:33.432 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16057], 00:36:33.432 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:36:33.432 | 99.00th=[18482], 99.50th=[19006], 99.90th=[46400], 99.95th=[50070], 00:36:33.432 | 99.99th=[50070] 00:36:33.432 bw ( KiB/s): min=23296, max=24576, per=31.56%, avg=24115.20, stdev=367.71, samples=20 00:36:33.432 iops : min= 182, max= 192, avg=188.40, stdev= 2.87, 
samples=20 00:36:33.432 lat (msec) : 20=99.68%, 50=0.27%, 100=0.05% 00:36:33.432 cpu : usr=94.57%, sys=4.93%, ctx=17, majf=0, minf=216 00:36:33.432 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.432 issued rwts: total=1886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.432 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.432 00:36:33.432 Run status group 0 (all jobs): 00:36:33.432 READ: bw=74.6MiB/s (78.2MB/s), 23.5MiB/s-26.7MiB/s (24.6MB/s-28.0MB/s), io=750MiB (786MB), run=10047-10048msec 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.432 00:36:33.432 real 
0m11.321s 00:36:33.432 user 0m29.779s 00:36:33.432 sys 0m1.797s 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:33.432 14:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.432 ************************************ 00:36:33.432 END TEST fio_dif_digest 00:36:33.432 ************************************ 00:36:33.432 14:08:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:33.432 14:08:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:33.432 rmmod nvme_tcp 00:36:33.432 rmmod nvme_fabrics 00:36:33.432 rmmod nvme_keyring 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2418071 ']' 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2418071 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2418071 ']' 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2418071 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2418071 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:33.432 14:08:03 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2418071' 00:36:33.432 killing process with pid 2418071 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2418071 00:36:33.432 14:08:03 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2418071 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:33.432 14:08:03 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:33.693 Waiting for block devices as requested 00:36:33.693 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:33.693 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:33.951 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:33.951 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:33.951 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:33.951 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:34.207 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:34.207 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:34.207 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:34.465 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:34.465 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:34.465 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:34.465 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:34.723 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:34.723 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:34.723 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:34.723 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:34.983 14:08:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.983 14:08:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:34.983 14:08:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.885 14:08:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:37.145 00:36:37.145 real 1m7.777s 00:36:37.145 user 6m35.105s 00:36:37.145 sys 0m16.495s 00:36:37.145 14:08:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:37.145 14:08:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:37.145 ************************************ 00:36:37.145 END TEST nvmf_dif 00:36:37.145 ************************************ 00:36:37.145 14:08:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:37.145 14:08:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:37.145 14:08:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:37.145 14:08:08 -- common/autotest_common.sh@10 -- # set +x 00:36:37.145 ************************************ 00:36:37.145 START TEST nvmf_abort_qd_sizes 00:36:37.145 ************************************ 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:37.145 * Looking for test storage... 
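The `iptr` teardown traced above removes only the firewall rules the test run itself added: every rule the harness installs carries an `SPDK_NVMF` comment tag, so cleanup is `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A minimal sketch of the filtering stage on sample rule text (the rules below are illustrative, not taken from this log), run without root by skipping the save/restore ends of the pipe:

```shell
# Two sample rules in iptables-save format; only the first carries the
# SPDK_NVMF comment marker that the harness tags its own rules with.
rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# The real cleanup is: iptables-save | grep -v SPDK_NVMF | iptables-restore.
# Here we run just the grep stage so no privileges are needed.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging rules with a comment and filtering the dump keeps the cleanup idempotent: pre-existing rules on the host survive untouched no matter how many test rules were added.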
00:36:37.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:37.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.145 --rc genhtml_branch_coverage=1 00:36:37.145 --rc genhtml_function_coverage=1 00:36:37.145 --rc genhtml_legend=1 00:36:37.145 --rc geninfo_all_blocks=1 00:36:37.145 --rc geninfo_unexecuted_blocks=1 00:36:37.145 00:36:37.145 ' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:37.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.145 --rc genhtml_branch_coverage=1 00:36:37.145 --rc genhtml_function_coverage=1 00:36:37.145 --rc genhtml_legend=1 00:36:37.145 --rc 
geninfo_all_blocks=1 00:36:37.145 --rc geninfo_unexecuted_blocks=1 00:36:37.145 00:36:37.145 ' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:37.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.145 --rc genhtml_branch_coverage=1 00:36:37.145 --rc genhtml_function_coverage=1 00:36:37.145 --rc genhtml_legend=1 00:36:37.145 --rc geninfo_all_blocks=1 00:36:37.145 --rc geninfo_unexecuted_blocks=1 00:36:37.145 00:36:37.145 ' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:37.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.145 --rc genhtml_branch_coverage=1 00:36:37.145 --rc genhtml_function_coverage=1 00:36:37.145 --rc genhtml_legend=1 00:36:37.145 --rc geninfo_all_blocks=1 00:36:37.145 --rc geninfo_unexecuted_blocks=1 00:36:37.145 00:36:37.145 ' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:37.145 14:08:08 nvmf_abort_qd_sizes 
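The lcov gate traced above uses the `lt`/`cmp_versions` helpers from `scripts/common.sh`, which split each version string on dots and compare numerically field by field (so `1.15 < 2`). A hedged re-implementation of that idea; `version_lt` is a name chosen here for illustration, not the script's own:

```shell
# Return 0 (true) if version $1 is strictly older than version $2.
# Missing fields are treated as 0, so "1" compares like "1.0".
version_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 is older than 2"
```

A plain string comparison would get this wrong (`"1.15" > "1.2"` lexically), which is why the trace splits on `IFS=.-:` and compares components as integers.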
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:37.145 14:08:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:37.146 14:08:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:37.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:37.146 14:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.790 14:08:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:39.790 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:39.790 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:39.790 Found net devices under 0000:09:00.0: cvl_0_0 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:39.790 Found net devices under 0000:09:00.1: cvl_0_1 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:36:39.790 00:36:39.790 --- 10.0.0.2 ping statistics --- 00:36:39.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.790 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:39.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:36:39.790
00:36:39.790 --- 10.0.0.1 ping statistics ---
00:36:39.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:39.790 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:39.790 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0
00:36:39.791 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:36:39.791 14:08:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:36:40.727 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:36:40.727 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:36:40.727 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:36:40.727 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:36:40.727 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:36:40.727 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:36:40.727 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:36:40.727 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:36:40.727 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:36:41.663 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:36:41.920 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:41.921 14:08:13
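On a phy run, `nvmf_tcp_init` (traced above) isolates the target side by moving one port of the NIC pair into a network namespace, addressing both ends, bringing the links up, and verifying reachability with the two pings. A sketch of that sequence using the interface names from this log; the `run` wrapper only echoes each command so the flow can be inspected without root (swap it for direct execution on a real host):

```shell
NS=cvl_0_0_ns_spdk                 # namespace name used by the harness
run() { echo "+ $*"; }             # dry-run; use run() { "$@"; } with root

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"           # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator port stays in the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                        # host -> target, over the wire
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> host
```

Because the two ports are separate PCI functions cabled back-to-back, splitting them across namespaces forces the NVMe/TCP traffic through the real NIC rather than the loopback path.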
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2429173 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2429173 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2429173 ']' 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.921 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.921 [2024-12-05 14:08:13.263538] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:36:41.921 [2024-12-05 14:08:13.263610] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.921 [2024-12-05 14:08:13.334361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:41.921 [2024-12-05 14:08:13.391342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.921 [2024-12-05 14:08:13.391393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:41.921 [2024-12-05 14:08:13.391428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.921 [2024-12-05 14:08:13.391440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.921 [2024-12-05 14:08:13.391449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:41.921 [2024-12-05 14:08:13.392939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.921 [2024-12-05 14:08:13.393004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:41.921 [2024-12-05 14:08:13.393070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:41.921 [2024-12-05 14:08:13.393073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:42.180 14:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.180 ************************************ 00:36:42.180 START TEST spdk_target_abort 00:36:42.180 ************************************ 00:36:42.180 14:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:42.180 14:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:42.180 14:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:36:42.180 14:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.180 14:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.462 spdk_targetn1 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- 
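The `nvme_in_userspace` trace above picks controllers out of `pci_bus_cache["0x010802"]`: PCI class 0x010802 is base class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A minimal sketch of that class check; the function name and sample value are illustrative (on Linux the class of a device is readable at `/sys/bus/pci/devices/<bdf>/class`):

```shell
# Return 0 if the given PCI class code identifies an NVMe controller.
is_nvme_class() {
  case $1 in
    0x010802) return 0 ;;   # storage / NVM / NVM Express
    *)        return 1 ;;
  esac
}

is_nvme_class 0x010802 && echo "0000:0b:00.0 looks like an NVMe controller"
```

Selecting by class code rather than by vendor/device ID is what lets the same enumeration work for the Intel 0x0a54 controller remapped earlier in the log and for any other NVMe device.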
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.462 [2024-12-05 14:08:16.396955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.462 [2024-12-05 14:08:16.437237] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.462 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:45.463 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:45.463 14:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.742 Initializing NVMe Controllers 00:36:48.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:48.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:48.742 Initialization complete. Launching workers. 
00:36:48.742 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10827, failed: 0 00:36:48.742 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1207, failed to submit 9620 00:36:48.742 success 746, unsuccessful 461, failed 0 00:36:48.742 14:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:48.742 14:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.026 Initializing NVMe Controllers 00:36:52.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:52.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:52.026 Initialization complete. Launching workers. 00:36:52.026 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9143, failed: 0 00:36:52.026 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7896 00:36:52.026 success 251, unsuccessful 996, failed 0 00:36:52.026 14:08:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.026 14:08:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.301 Initializing NVMe Controllers 00:36:55.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.301 Initialization complete. Launching workers. 
00:36:55.301 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31667, failed: 0 00:36:55.301 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2715, failed to submit 28952 00:36:55.301 success 510, unsuccessful 2205, failed 0 00:36:55.301 14:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:55.301 14:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.301 14:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:55.302 14:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.302 14:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:55.302 14:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.302 14:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2429173 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2429173 ']' 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2429173 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2429173 00:36:56.235 14:08:27 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429173' 00:36:56.235 killing process with pid 2429173 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2429173 00:36:56.235 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2429173 00:36:56.494 00:36:56.494 real 0m14.257s 00:36:56.494 user 0m53.480s 00:36:56.494 sys 0m2.878s 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:56.494 ************************************ 00:36:56.494 END TEST spdk_target_abort 00:36:56.494 ************************************ 00:36:56.494 14:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:56.494 14:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:56.494 14:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:56.494 14:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:56.494 ************************************ 00:36:56.494 START TEST kernel_target_abort 00:36:56.494 ************************************ 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:56.494 14:08:27 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:56.494 14:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:57.430 Waiting for block devices as requested 00:36:57.689 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:57.689 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:57.689 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:57.949 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:57.949 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:57.949 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:57.949 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:58.208 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:58.208 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:58.468 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:58.468 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:58.468 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:58.468 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:58.468 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:58.727 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:58.727 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:58.727 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:58.985 14:08:30 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:58.985 No valid GPT data, bailing 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:58.985 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:36:58.985 00:36:58.985 Discovery Log Number of Records 2, Generation counter 2 00:36:58.985 =====Discovery Log Entry 0====== 00:36:58.985 trtype: tcp 00:36:58.985 adrfam: ipv4 00:36:58.985 subtype: current discovery subsystem 00:36:58.985 treq: not specified, sq flow control disable supported 00:36:58.985 portid: 1 00:36:58.985 trsvcid: 4420 00:36:58.985 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:58.985 traddr: 10.0.0.1 00:36:58.985 eflags: none 00:36:58.985 sectype: none 00:36:58.985 =====Discovery Log Entry 1====== 00:36:58.985 trtype: tcp 00:36:58.985 adrfam: ipv4 00:36:58.985 subtype: nvme subsystem 00:36:58.985 treq: not specified, sq flow control disable supported 00:36:58.985 portid: 1 00:36:58.985 trsvcid: 4420 00:36:58.985 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:58.985 traddr: 10.0.0.1 00:36:58.985 eflags: none 00:36:58.985 sectype: none 00:36:58.986 14:08:30 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:58.986 14:08:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:02.276 Initializing NVMe Controllers 00:37:02.276 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:02.276 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:02.276 Initialization complete. Launching workers. 
00:37:02.276 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57976, failed: 0 00:37:02.276 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57976, failed to submit 0 00:37:02.276 success 0, unsuccessful 57976, failed 0 00:37:02.276 14:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:02.276 14:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:05.603 Initializing NVMe Controllers 00:37:05.603 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:05.603 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:05.603 Initialization complete. Launching workers. 00:37:05.603 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102637, failed: 0 00:37:05.603 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25826, failed to submit 76811 00:37:05.603 success 0, unsuccessful 25826, failed 0 00:37:05.603 14:08:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:05.603 14:08:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:08.888 Initializing NVMe Controllers 00:37:08.888 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:08.888 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:08.888 Initialization complete. Launching workers. 
00:37:08.888 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101131, failed: 0 00:37:08.888 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25294, failed to submit 75837 00:37:08.888 success 0, unsuccessful 25294, failed 0 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:08.888 14:08:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:09.826 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:09.826 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:09.826 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:09.826 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:09.826 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:09.826 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:09.826 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:09.826 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:09.826 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:10.763 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:37:11.021 00:37:11.021 real 0m14.430s 00:37:11.021 user 0m6.765s 00:37:11.021 sys 0m3.238s 00:37:11.021 14:08:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:11.021 14:08:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:11.021 ************************************ 00:37:11.021 END TEST kernel_target_abort 00:37:11.021 ************************************ 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:11.021 rmmod nvme_tcp 00:37:11.021 rmmod nvme_fabrics 00:37:11.021 rmmod nvme_keyring 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2429173 ']' 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2429173 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2429173 ']' 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2429173 00:37:11.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2429173) - No such process 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2429173 is not found' 00:37:11.021 Process with pid 2429173 is not found 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:11.021 14:08:42 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:12.399 Waiting for block devices as requested 00:37:12.399 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:12.399 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:12.399 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:12.399 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:12.399 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:12.659 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:12.659 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:12.659 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:12.919 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:37:12.919 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:12.919 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:13.179 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:13.179 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:13.179 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:13.179 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:13.438 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:13.438 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:13.438 14:08:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.984 14:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:15.984 00:37:15.984 real 0m38.527s 00:37:15.984 user 1m2.550s 00:37:15.984 sys 0m9.746s 00:37:15.984 14:08:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:15.984 14:08:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:15.984 ************************************ 00:37:15.984 END TEST nvmf_abort_qd_sizes 00:37:15.984 ************************************ 00:37:15.984 14:08:47 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:15.984 14:08:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:15.984 14:08:47 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:37:15.984 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:37:15.984 ************************************ 00:37:15.984 START TEST keyring_file 00:37:15.984 ************************************ 00:37:15.984 14:08:47 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:15.984 * Looking for test storage... 00:37:15.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:15.984 14:08:47 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:15.984 14:08:47 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:37:15.984 14:08:47 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:15.984 14:08:47 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:15.984 14:08:47 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:15.984 14:08:47 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:15.984 14:08:47 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:15.984 14:08:47 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:15.984 14:08:47 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:15.984 14:08:47 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:15.985 14:08:47 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:15.985 14:08:47 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:15.985 14:08:47 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:15.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.985 --rc genhtml_branch_coverage=1 00:37:15.985 --rc genhtml_function_coverage=1 00:37:15.985 --rc genhtml_legend=1 00:37:15.985 --rc geninfo_all_blocks=1 00:37:15.985 --rc geninfo_unexecuted_blocks=1 00:37:15.985 00:37:15.985 ' 00:37:15.985 14:08:47 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:15.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.985 --rc genhtml_branch_coverage=1 00:37:15.985 --rc genhtml_function_coverage=1 00:37:15.985 --rc genhtml_legend=1 00:37:15.985 --rc geninfo_all_blocks=1 00:37:15.985 --rc 
geninfo_unexecuted_blocks=1 00:37:15.985 00:37:15.985 ' 00:37:15.985 14:08:47 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:15.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.985 --rc genhtml_branch_coverage=1 00:37:15.985 --rc genhtml_function_coverage=1 00:37:15.985 --rc genhtml_legend=1 00:37:15.985 --rc geninfo_all_blocks=1 00:37:15.985 --rc geninfo_unexecuted_blocks=1 00:37:15.985 00:37:15.985 ' 00:37:15.985 14:08:47 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:15.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.985 --rc genhtml_branch_coverage=1 00:37:15.985 --rc genhtml_function_coverage=1 00:37:15.985 --rc genhtml_legend=1 00:37:15.985 --rc geninfo_all_blocks=1 00:37:15.985 --rc geninfo_unexecuted_blocks=1 00:37:15.985 00:37:15.985 ' 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:15.985 14:08:47 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.985 14:08:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:15.985 14:08:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.985 14:08:47 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.985 14:08:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.985 14:08:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:15.985 14:08:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:15.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.71jEhLLr92 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.71jEhLLr92 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.71jEhLLr92 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.71jEhLLr92 00:37:15.985 14:08:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fq2Dfc0kLu 00:37:15.985 14:08:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:15.985 14:08:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:15.986 14:08:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fq2Dfc0kLu 00:37:15.986 14:08:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fq2Dfc0kLu 00:37:15.986 14:08:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.fq2Dfc0kLu 
00:37:15.986 14:08:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=2434941 00:37:15.986 14:08:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:15.986 14:08:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2434941 00:37:15.986 14:08:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2434941 ']' 00:37:15.986 14:08:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.986 14:08:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:15.986 14:08:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:15.986 14:08:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:15.986 14:08:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:15.986 [2024-12-05 14:08:47.330852] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:37:15.986 [2024-12-05 14:08:47.330944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434941 ] 00:37:15.986 [2024-12-05 14:08:47.396499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.986 [2024-12-05 14:08:47.450790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.246 14:08:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.246 14:08:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:16.246 14:08:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:16.246 14:08:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.246 14:08:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.246 [2024-12-05 14:08:47.721521] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.246 null0 00:37:16.246 [2024-12-05 14:08:47.753566] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:16.246 [2024-12-05 14:08:47.754056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.505 14:08:47 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.505 [2024-12-05 14:08:47.777605] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:16.505 request: 00:37:16.505 { 00:37:16.505 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.505 "secure_channel": false, 00:37:16.505 "listen_address": { 00:37:16.505 "trtype": "tcp", 00:37:16.505 "traddr": "127.0.0.1", 00:37:16.505 "trsvcid": "4420" 00:37:16.505 }, 00:37:16.505 "method": "nvmf_subsystem_add_listener", 00:37:16.505 "req_id": 1 00:37:16.505 } 00:37:16.505 Got JSON-RPC error response 00:37:16.505 response: 00:37:16.505 { 00:37:16.505 "code": -32602, 00:37:16.505 "message": "Invalid parameters" 00:37:16.505 } 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:16.505 14:08:47 keyring_file -- keyring/file.sh@47 -- # bperfpid=2434955 00:37:16.505 14:08:47 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:16.505 14:08:47 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2434955 /var/tmp/bperf.sock 00:37:16.505 14:08:47 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2434955 ']' 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.505 14:08:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.505 [2024-12-05 14:08:47.825907] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:37:16.505 [2024-12-05 14:08:47.825967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434955 ] 00:37:16.505 [2024-12-05 14:08:47.890810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.505 [2024-12-05 14:08:47.947457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.763 14:08:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.763 14:08:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:16.763 14:08:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:16.763 14:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:17.022 14:08:48 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fq2Dfc0kLu 00:37:17.022 14:08:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fq2Dfc0kLu 00:37:17.280 14:08:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:17.280 14:08:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:17.280 14:08:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.280 14:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.280 14:08:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.561 14:08:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.71jEhLLr92 == \/\t\m\p\/\t\m\p\.\7\1\j\E\h\L\L\r\9\2 ]] 00:37:17.561 14:08:48 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:17.561 14:08:48 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:17.561 14:08:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.561 14:08:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:17.561 14:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.819 14:08:49 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.fq2Dfc0kLu == \/\t\m\p\/\t\m\p\.\f\q\2\D\f\c\0\k\L\u ]] 00:37:17.819 14:08:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:17.819 14:08:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.819 14:08:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.819 14:08:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.819 14:08:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.819 14:08:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:37:18.077 14:08:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:18.077 14:08:49 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:18.077 14:08:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:18.077 14:08:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.077 14:08:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.077 14:08:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.077 14:08:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:18.336 14:08:49 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:18.336 14:08:49 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.336 14:08:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.594 [2024-12-05 14:08:49.966044] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:18.594 nvme0n1 00:37:18.594 14:08:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:18.594 14:08:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.594 14:08:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.594 14:08:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.594 14:08:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.594 14:08:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:37:18.852 14:08:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:18.852 14:08:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:18.852 14:08:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:18.852 14:08:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.852 14:08:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.852 14:08:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.852 14:08:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:19.110 14:08:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:19.110 14:08:50 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:19.370 Running I/O for 1 seconds... 00:37:20.311 10079.00 IOPS, 39.37 MiB/s 00:37:20.311 Latency(us) 00:37:20.311 [2024-12-05T13:08:51.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.311 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:20.311 nvme0n1 : 1.01 10139.53 39.61 0.00 0.00 12589.28 4563.25 20097.71 00:37:20.311 [2024-12-05T13:08:51.837Z] =================================================================================================================== 00:37:20.311 [2024-12-05T13:08:51.837Z] Total : 10139.53 39.61 0.00 0.00 12589.28 4563.25 20097.71 00:37:20.311 { 00:37:20.311 "results": [ 00:37:20.311 { 00:37:20.311 "job": "nvme0n1", 00:37:20.311 "core_mask": "0x2", 00:37:20.311 "workload": "randrw", 00:37:20.311 "percentage": 50, 00:37:20.311 "status": "finished", 00:37:20.311 "queue_depth": 128, 00:37:20.311 "io_size": 4096, 00:37:20.311 "runtime": 1.006753, 00:37:20.311 "iops": 10139.527768976104, 00:37:20.311 "mibps": 39.60753034756291, 
00:37:20.311 "io_failed": 0, 00:37:20.311 "io_timeout": 0, 00:37:20.311 "avg_latency_us": 12589.281653314756, 00:37:20.311 "min_latency_us": 4563.247407407407, 00:37:20.311 "max_latency_us": 20097.706666666665 00:37:20.311 } 00:37:20.311 ], 00:37:20.311 "core_count": 1 00:37:20.311 } 00:37:20.311 14:08:51 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:20.311 14:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:20.569 14:08:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:20.569 14:08:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.569 14:08:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.569 14:08:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.569 14:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.569 14:08:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:20.827 14:08:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:20.827 14:08:52 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:20.827 14:08:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:20.827 14:08:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.827 14:08:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.827 14:08:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:20.827 14:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.085 14:08:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:21.085 14:08:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.085 14:08:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:21.085 14:08:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.085 14:08:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:21.085 14:08:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:21.085 14:08:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:21.085 14:08:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:21.085 14:08:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.086 14:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.344 [2024-12-05 14:08:52.862180] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:21.344 [2024-12-05 14:08:52.862879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80f530 (107): Transport endpoint is not connected 00:37:21.344 [2024-12-05 14:08:52.863870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80f530 (9): Bad file descriptor 00:37:21.344 [2024-12-05 14:08:52.864870] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:21.344 [2024-12-05 14:08:52.864889] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:21.344 [2024-12-05 14:08:52.864917] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:21.344 [2024-12-05 14:08:52.864931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:37:21.344 request: 00:37:21.344 { 00:37:21.344 "name": "nvme0", 00:37:21.344 "trtype": "tcp", 00:37:21.344 "traddr": "127.0.0.1", 00:37:21.344 "adrfam": "ipv4", 00:37:21.344 "trsvcid": "4420", 00:37:21.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.344 "prchk_reftag": false, 00:37:21.344 "prchk_guard": false, 00:37:21.344 "hdgst": false, 00:37:21.344 "ddgst": false, 00:37:21.344 "psk": "key1", 00:37:21.344 "allow_unrecognized_csi": false, 00:37:21.344 "method": "bdev_nvme_attach_controller", 00:37:21.344 "req_id": 1 00:37:21.344 } 00:37:21.344 Got JSON-RPC error response 00:37:21.344 response: 00:37:21.344 { 00:37:21.344 "code": -5, 00:37:21.344 "message": "Input/output error" 00:37:21.344 } 00:37:21.602 14:08:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:21.602 14:08:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:21.602 14:08:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:21.602 14:08:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:21.602 14:08:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:21.602 14:08:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:21.602 14:08:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.602 14:08:52 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:37:21.602 14:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.602 14:08:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.860 14:08:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:21.860 14:08:53 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:21.860 14:08:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:21.860 14:08:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.860 14:08:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.860 14:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.860 14:08:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:22.119 14:08:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:22.119 14:08:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:22.119 14:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:22.377 14:08:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:22.377 14:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:22.635 14:08:53 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:22.635 14:08:53 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:22.635 14:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.893 14:08:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:37:22.893 14:08:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.71jEhLLr92 00:37:22.893 14:08:54 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:22.893 14:08:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:22.893 14:08:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:22.893 14:08:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:22.893 14:08:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:22.893 14:08:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:22.893 14:08:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:22.893 14:08:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:22.893 14:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:23.151 [2024-12-05 14:08:54.508969] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.71jEhLLr92': 0100660 00:37:23.151 [2024-12-05 14:08:54.509001] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:23.151 request: 00:37:23.151 { 00:37:23.151 "name": "key0", 00:37:23.151 "path": "/tmp/tmp.71jEhLLr92", 00:37:23.151 "method": "keyring_file_add_key", 00:37:23.151 "req_id": 1 00:37:23.151 } 00:37:23.151 Got JSON-RPC error response 00:37:23.151 response: 00:37:23.151 { 00:37:23.151 "code": -1, 00:37:23.151 "message": "Operation not permitted" 00:37:23.151 } 00:37:23.151 14:08:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:23.151 14:08:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.151 14:08:54 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.151 14:08:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.151 14:08:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.71jEhLLr92 00:37:23.151 14:08:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:23.151 14:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.71jEhLLr92 00:37:23.409 14:08:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.71jEhLLr92 00:37:23.409 14:08:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:23.409 14:08:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.409 14:08:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.409 14:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.409 14:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.409 14:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.670 14:08:55 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:23.670 14:08:55 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.671 14:08:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:23.671 14:08:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.671 14:08:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:23.671 14:08:55 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.671 14:08:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:23.671 14:08:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.671 14:08:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.671 14:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.931 [2024-12-05 14:08:55.347239] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.71jEhLLr92': No such file or directory 00:37:23.931 [2024-12-05 14:08:55.347270] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:23.931 [2024-12-05 14:08:55.347308] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:23.931 [2024-12-05 14:08:55.347321] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:23.931 [2024-12-05 14:08:55.347334] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:23.931 [2024-12-05 14:08:55.347346] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:23.931 request: 00:37:23.931 { 00:37:23.931 "name": "nvme0", 00:37:23.931 "trtype": "tcp", 00:37:23.931 "traddr": "127.0.0.1", 00:37:23.931 "adrfam": "ipv4", 00:37:23.931 "trsvcid": "4420", 00:37:23.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.931 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:37:23.931 "prchk_reftag": false, 00:37:23.931 "prchk_guard": false, 00:37:23.931 "hdgst": false, 00:37:23.931 "ddgst": false, 00:37:23.931 "psk": "key0", 00:37:23.931 "allow_unrecognized_csi": false, 00:37:23.931 "method": "bdev_nvme_attach_controller", 00:37:23.931 "req_id": 1 00:37:23.931 } 00:37:23.931 Got JSON-RPC error response 00:37:23.931 response: 00:37:23.931 { 00:37:23.931 "code": -19, 00:37:23.931 "message": "No such device" 00:37:23.931 } 00:37:23.931 14:08:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:23.931 14:08:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.931 14:08:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.931 14:08:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.931 14:08:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:23.931 14:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:24.189 14:08:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uRFPcxikeU 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:24.189 14:08:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:24.189 14:08:55 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:37:24.189 14:08:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:24.189 14:08:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:24.189 14:08:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:24.189 14:08:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uRFPcxikeU 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uRFPcxikeU 00:37:24.189 14:08:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.uRFPcxikeU 00:37:24.189 14:08:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRFPcxikeU 00:37:24.189 14:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRFPcxikeU 00:37:24.447 14:08:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:24.447 14:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:25.015 nvme0n1 00:37:25.015 14:08:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:25.015 14:08:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:25.015 14:08:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.015 14:08:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.015 14:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.015 14:08:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.272 14:08:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:25.272 14:08:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:25.272 14:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:25.530 14:08:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:25.530 14:08:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:25.530 14:08:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.530 14:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.530 14:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.787 14:08:57 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:25.787 14:08:57 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:25.787 14:08:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:25.787 14:08:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.787 14:08:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.787 14:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.787 14:08:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:26.046 14:08:57 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:26.046 14:08:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:26.046 14:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:37:26.305 14:08:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:26.305 14:08:57 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:26.305 14:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.562 14:08:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:26.562 14:08:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRFPcxikeU 00:37:26.562 14:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRFPcxikeU 00:37:26.820 14:08:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fq2Dfc0kLu 00:37:26.820 14:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fq2Dfc0kLu 00:37:27.078 14:08:58 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.078 14:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.336 nvme0n1 00:37:27.336 14:08:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:27.336 14:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:27.904 14:08:59 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:27.904 "subsystems": [ 00:37:27.904 { 00:37:27.904 "subsystem": 
"keyring", 00:37:27.904 "config": [ 00:37:27.904 { 00:37:27.904 "method": "keyring_file_add_key", 00:37:27.904 "params": { 00:37:27.904 "name": "key0", 00:37:27.904 "path": "/tmp/tmp.uRFPcxikeU" 00:37:27.904 } 00:37:27.904 }, 00:37:27.904 { 00:37:27.904 "method": "keyring_file_add_key", 00:37:27.904 "params": { 00:37:27.904 "name": "key1", 00:37:27.904 "path": "/tmp/tmp.fq2Dfc0kLu" 00:37:27.904 } 00:37:27.904 } 00:37:27.904 ] 00:37:27.904 }, 00:37:27.904 { 00:37:27.904 "subsystem": "iobuf", 00:37:27.904 "config": [ 00:37:27.904 { 00:37:27.904 "method": "iobuf_set_options", 00:37:27.904 "params": { 00:37:27.904 "small_pool_count": 8192, 00:37:27.904 "large_pool_count": 1024, 00:37:27.904 "small_bufsize": 8192, 00:37:27.904 "large_bufsize": 135168, 00:37:27.904 "enable_numa": false 00:37:27.904 } 00:37:27.904 } 00:37:27.904 ] 00:37:27.904 }, 00:37:27.904 { 00:37:27.904 "subsystem": "sock", 00:37:27.904 "config": [ 00:37:27.904 { 00:37:27.904 "method": "sock_set_default_impl", 00:37:27.904 "params": { 00:37:27.904 "impl_name": "posix" 00:37:27.904 } 00:37:27.904 }, 00:37:27.904 { 00:37:27.904 "method": "sock_impl_set_options", 00:37:27.904 "params": { 00:37:27.904 "impl_name": "ssl", 00:37:27.904 "recv_buf_size": 4096, 00:37:27.904 "send_buf_size": 4096, 00:37:27.904 "enable_recv_pipe": true, 00:37:27.904 "enable_quickack": false, 00:37:27.904 "enable_placement_id": 0, 00:37:27.904 "enable_zerocopy_send_server": true, 00:37:27.904 "enable_zerocopy_send_client": false, 00:37:27.904 "zerocopy_threshold": 0, 00:37:27.904 "tls_version": 0, 00:37:27.904 "enable_ktls": false 00:37:27.904 } 00:37:27.904 }, 00:37:27.904 { 00:37:27.904 "method": "sock_impl_set_options", 00:37:27.904 "params": { 00:37:27.904 "impl_name": "posix", 00:37:27.904 "recv_buf_size": 2097152, 00:37:27.904 "send_buf_size": 2097152, 00:37:27.905 "enable_recv_pipe": true, 00:37:27.905 "enable_quickack": false, 00:37:27.905 "enable_placement_id": 0, 00:37:27.905 "enable_zerocopy_send_server": true, 
00:37:27.905 "enable_zerocopy_send_client": false, 00:37:27.905 "zerocopy_threshold": 0, 00:37:27.905 "tls_version": 0, 00:37:27.905 "enable_ktls": false 00:37:27.905 } 00:37:27.905 } 00:37:27.905 ] 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "subsystem": "vmd", 00:37:27.905 "config": [] 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "subsystem": "accel", 00:37:27.905 "config": [ 00:37:27.905 { 00:37:27.905 "method": "accel_set_options", 00:37:27.905 "params": { 00:37:27.905 "small_cache_size": 128, 00:37:27.905 "large_cache_size": 16, 00:37:27.905 "task_count": 2048, 00:37:27.905 "sequence_count": 2048, 00:37:27.905 "buf_count": 2048 00:37:27.905 } 00:37:27.905 } 00:37:27.905 ] 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "subsystem": "bdev", 00:37:27.905 "config": [ 00:37:27.905 { 00:37:27.905 "method": "bdev_set_options", 00:37:27.905 "params": { 00:37:27.905 "bdev_io_pool_size": 65535, 00:37:27.905 "bdev_io_cache_size": 256, 00:37:27.905 "bdev_auto_examine": true, 00:37:27.905 "iobuf_small_cache_size": 128, 00:37:27.905 "iobuf_large_cache_size": 16 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "bdev_raid_set_options", 00:37:27.905 "params": { 00:37:27.905 "process_window_size_kb": 1024, 00:37:27.905 "process_max_bandwidth_mb_sec": 0 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "bdev_iscsi_set_options", 00:37:27.905 "params": { 00:37:27.905 "timeout_sec": 30 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "bdev_nvme_set_options", 00:37:27.905 "params": { 00:37:27.905 "action_on_timeout": "none", 00:37:27.905 "timeout_us": 0, 00:37:27.905 "timeout_admin_us": 0, 00:37:27.905 "keep_alive_timeout_ms": 10000, 00:37:27.905 "arbitration_burst": 0, 00:37:27.905 "low_priority_weight": 0, 00:37:27.905 "medium_priority_weight": 0, 00:37:27.905 "high_priority_weight": 0, 00:37:27.905 "nvme_adminq_poll_period_us": 10000, 00:37:27.905 "nvme_ioq_poll_period_us": 0, 00:37:27.905 "io_queue_requests": 512, 
00:37:27.905 "delay_cmd_submit": true, 00:37:27.905 "transport_retry_count": 4, 00:37:27.905 "bdev_retry_count": 3, 00:37:27.905 "transport_ack_timeout": 0, 00:37:27.905 "ctrlr_loss_timeout_sec": 0, 00:37:27.905 "reconnect_delay_sec": 0, 00:37:27.905 "fast_io_fail_timeout_sec": 0, 00:37:27.905 "disable_auto_failback": false, 00:37:27.905 "generate_uuids": false, 00:37:27.905 "transport_tos": 0, 00:37:27.905 "nvme_error_stat": false, 00:37:27.905 "rdma_srq_size": 0, 00:37:27.905 "io_path_stat": false, 00:37:27.905 "allow_accel_sequence": false, 00:37:27.905 "rdma_max_cq_size": 0, 00:37:27.905 "rdma_cm_event_timeout_ms": 0, 00:37:27.905 "dhchap_digests": [ 00:37:27.905 "sha256", 00:37:27.905 "sha384", 00:37:27.905 "sha512" 00:37:27.905 ], 00:37:27.905 "dhchap_dhgroups": [ 00:37:27.905 "null", 00:37:27.905 "ffdhe2048", 00:37:27.905 "ffdhe3072", 00:37:27.905 "ffdhe4096", 00:37:27.905 "ffdhe6144", 00:37:27.905 "ffdhe8192" 00:37:27.905 ] 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "bdev_nvme_attach_controller", 00:37:27.905 "params": { 00:37:27.905 "name": "nvme0", 00:37:27.905 "trtype": "TCP", 00:37:27.905 "adrfam": "IPv4", 00:37:27.905 "traddr": "127.0.0.1", 00:37:27.905 "trsvcid": "4420", 00:37:27.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.905 "prchk_reftag": false, 00:37:27.905 "prchk_guard": false, 00:37:27.905 "ctrlr_loss_timeout_sec": 0, 00:37:27.905 "reconnect_delay_sec": 0, 00:37:27.905 "fast_io_fail_timeout_sec": 0, 00:37:27.905 "psk": "key0", 00:37:27.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.905 "hdgst": false, 00:37:27.905 "ddgst": false, 00:37:27.905 "multipath": "multipath" 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "bdev_nvme_set_hotplug", 00:37:27.905 "params": { 00:37:27.905 "period_us": 100000, 00:37:27.905 "enable": false 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "bdev_wait_for_examine" 00:37:27.905 } 00:37:27.905 ] 00:37:27.905 }, 00:37:27.905 { 
00:37:27.905 "subsystem": "nbd", 00:37:27.905 "config": [] 00:37:27.905 } 00:37:27.905 ] 00:37:27.905 }' 00:37:27.905 14:08:59 keyring_file -- keyring/file.sh@115 -- # killprocess 2434955 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2434955 ']' 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2434955 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434955 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434955' 00:37:27.905 killing process with pid 2434955 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@973 -- # kill 2434955 00:37:27.905 Received shutdown signal, test time was about 1.000000 seconds 00:37:27.905 00:37:27.905 Latency(us) 00:37:27.905 [2024-12-05T13:08:59.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.905 [2024-12-05T13:08:59.431Z] =================================================================================================================== 00:37:27.905 [2024-12-05T13:08:59.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@978 -- # wait 2434955 00:37:27.905 14:08:59 keyring_file -- keyring/file.sh@118 -- # bperfpid=2436538 00:37:27.905 14:08:59 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2436538 /var/tmp/bperf.sock 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2436538 ']' 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:37:27.905 14:08:59 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.905 14:08:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:27.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:27.905 14:08:59 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:27.905 "subsystems": [ 00:37:27.905 { 00:37:27.905 "subsystem": "keyring", 00:37:27.905 "config": [ 00:37:27.905 { 00:37:27.905 "method": "keyring_file_add_key", 00:37:27.905 "params": { 00:37:27.905 "name": "key0", 00:37:27.905 "path": "/tmp/tmp.uRFPcxikeU" 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "keyring_file_add_key", 00:37:27.905 "params": { 00:37:27.905 "name": "key1", 00:37:27.905 "path": "/tmp/tmp.fq2Dfc0kLu" 00:37:27.905 } 00:37:27.905 } 00:37:27.905 ] 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "subsystem": "iobuf", 00:37:27.905 "config": [ 00:37:27.905 { 00:37:27.905 "method": "iobuf_set_options", 00:37:27.905 "params": { 00:37:27.905 "small_pool_count": 8192, 00:37:27.905 "large_pool_count": 1024, 00:37:27.905 "small_bufsize": 8192, 00:37:27.905 "large_bufsize": 135168, 00:37:27.905 "enable_numa": false 00:37:27.905 } 00:37:27.905 } 00:37:27.905 ] 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "subsystem": "sock", 00:37:27.905 "config": [ 00:37:27.905 { 00:37:27.905 "method": "sock_set_default_impl", 00:37:27.905 "params": { 00:37:27.905 "impl_name": "posix" 00:37:27.905 } 00:37:27.905 }, 00:37:27.905 { 00:37:27.905 "method": "sock_impl_set_options", 00:37:27.905 "params": { 00:37:27.905 "impl_name": "ssl", 00:37:27.906 "recv_buf_size": 4096, 00:37:27.906 
"send_buf_size": 4096, 00:37:27.906 "enable_recv_pipe": true, 00:37:27.906 "enable_quickack": false, 00:37:27.906 "enable_placement_id": 0, 00:37:27.906 "enable_zerocopy_send_server": true, 00:37:27.906 "enable_zerocopy_send_client": false, 00:37:27.906 "zerocopy_threshold": 0, 00:37:27.906 "tls_version": 0, 00:37:27.906 "enable_ktls": false 00:37:27.906 } 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "method": "sock_impl_set_options", 00:37:27.906 "params": { 00:37:27.906 "impl_name": "posix", 00:37:27.906 "recv_buf_size": 2097152, 00:37:27.906 "send_buf_size": 2097152, 00:37:27.906 "enable_recv_pipe": true, 00:37:27.906 "enable_quickack": false, 00:37:27.906 "enable_placement_id": 0, 00:37:27.906 "enable_zerocopy_send_server": true, 00:37:27.906 "enable_zerocopy_send_client": false, 00:37:27.906 "zerocopy_threshold": 0, 00:37:27.906 "tls_version": 0, 00:37:27.906 "enable_ktls": false 00:37:27.906 } 00:37:27.906 } 00:37:27.906 ] 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "subsystem": "vmd", 00:37:27.906 "config": [] 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "subsystem": "accel", 00:37:27.906 "config": [ 00:37:27.906 { 00:37:27.906 "method": "accel_set_options", 00:37:27.906 "params": { 00:37:27.906 "small_cache_size": 128, 00:37:27.906 "large_cache_size": 16, 00:37:27.906 "task_count": 2048, 00:37:27.906 "sequence_count": 2048, 00:37:27.906 "buf_count": 2048 00:37:27.906 } 00:37:27.906 } 00:37:27.906 ] 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "subsystem": "bdev", 00:37:27.906 "config": [ 00:37:27.906 { 00:37:27.906 "method": "bdev_set_options", 00:37:27.906 "params": { 00:37:27.906 "bdev_io_pool_size": 65535, 00:37:27.906 "bdev_io_cache_size": 256, 00:37:27.906 "bdev_auto_examine": true, 00:37:27.906 "iobuf_small_cache_size": 128, 00:37:27.906 "iobuf_large_cache_size": 16 00:37:27.906 } 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "method": "bdev_raid_set_options", 00:37:27.906 "params": { 00:37:27.906 "process_window_size_kb": 1024, 00:37:27.906 
"process_max_bandwidth_mb_sec": 0 00:37:27.906 } 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "method": "bdev_iscsi_set_options", 00:37:27.906 "params": { 00:37:27.906 "timeout_sec": 30 00:37:27.906 } 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "method": "bdev_nvme_set_options", 00:37:27.906 "params": { 00:37:27.906 "action_on_timeout": "none", 00:37:27.906 "timeout_us": 0, 00:37:27.906 "timeout_admin_us": 0, 00:37:27.906 "keep_alive_timeout_ms": 10000, 00:37:27.906 "arbitration_burst": 0, 00:37:27.906 "low_priority_weight": 0, 00:37:27.906 "medium_priority_weight": 0, 00:37:27.906 "high_priority_weight": 0, 00:37:27.906 "nvme_adminq_poll_period_us": 10000, 00:37:27.906 "nvme_ioq_poll_period_us": 0, 00:37:27.906 "io_queue_requests": 512, 00:37:27.906 "delay_cmd_submit": true, 00:37:27.906 "transport_retry_count": 4, 00:37:27.906 "bdev_retry_count": 3, 00:37:27.906 "transport_ack_timeout": 0, 00:37:27.906 "ctrlr_loss_timeout_sec": 0, 00:37:27.906 "reconnect_delay_sec": 0, 00:37:27.906 "fast_io_fail_timeout_sec": 0, 00:37:27.906 "disable_auto_failback": false, 00:37:27.906 "generate_uuids": false, 00:37:27.906 "transport_tos": 0, 00:37:27.906 "nvme_error_stat": false, 00:37:27.906 "rdma_srq_size": 0, 00:37:27.906 "io_path_stat": false, 00:37:27.906 "allow_accel_sequence": false, 00:37:27.906 "rdma_max_cq_size": 0, 00:37:27.906 "rdma_cm_event_timeout_ms": 0, 00:37:27.906 "dhchap_digests": [ 00:37:27.906 "sha256", 00:37:27.906 "sha384", 00:37:27.906 "sha512" 00:37:27.906 ], 00:37:27.906 "dhchap_dhgroups": [ 00:37:27.906 "null", 00:37:27.906 "ffdhe2048", 00:37:27.906 "ffdhe3072", 00:37:27.906 "ffdhe4096", 00:37:27.906 "ffdhe6144", 00:37:27.906 "ffdhe8192" 00:37:27.906 ] 00:37:27.906 } 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "method": "bdev_nvme_attach_controller", 00:37:27.906 "params": { 00:37:27.906 "name": "nvme0", 00:37:27.906 "trtype": "TCP", 00:37:27.906 "adrfam": "IPv4", 00:37:27.906 "traddr": "127.0.0.1", 00:37:27.906 "trsvcid": "4420", 00:37:27.906 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:37:27.906 "prchk_reftag": false, 00:37:27.906 "prchk_guard": false, 00:37:27.906 "ctrlr_loss_timeout_sec": 0, 00:37:27.906 "reconnect_delay_sec": 0, 00:37:27.906 "fast_io_fail_timeout_sec": 0, 00:37:27.906 "psk": "key0", 00:37:27.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.906 "hdgst": false, 00:37:27.906 "ddgst": false, 00:37:27.906 "multipath": "multipath" 00:37:27.906 } 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "method": "bdev_nvme_set_hotplug", 00:37:27.906 "params": { 00:37:27.906 "period_us": 100000, 00:37:27.906 "enable": false 00:37:27.906 } 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "method": "bdev_wait_for_examine" 00:37:27.906 } 00:37:27.906 ] 00:37:27.906 }, 00:37:27.906 { 00:37:27.906 "subsystem": "nbd", 00:37:27.906 "config": [] 00:37:27.906 } 00:37:27.906 ] 00:37:27.906 }' 00:37:27.906 14:08:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.906 14:08:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:28.166 [2024-12-05 14:08:59.474184] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:37:28.166 [2024-12-05 14:08:59.474268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436538 ] 00:37:28.166 [2024-12-05 14:08:59.539305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.166 [2024-12-05 14:08:59.595873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.425 [2024-12-05 14:08:59.787213] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:28.425 14:08:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:28.425 14:08:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:28.425 14:08:59 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:28.425 14:08:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.425 14:08:59 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:28.683 14:09:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:28.683 14:09:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:28.683 14:09:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.683 14:09:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.683 14:09:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.683 14:09:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.683 14:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.248 14:09:00 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:29.248 14:09:00 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:29.248 14:09:00 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:29.248 14:09:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:29.248 14:09:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:29.248 14:09:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:29.248 14:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.506 14:09:00 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:29.506 14:09:00 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:29.506 14:09:00 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:29.506 14:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:29.764 14:09:01 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:29.764 14:09:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:29.764 14:09:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.uRFPcxikeU /tmp/tmp.fq2Dfc0kLu 00:37:29.764 14:09:01 keyring_file -- keyring/file.sh@20 -- # killprocess 2436538 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2436538 ']' 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2436538 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2436538 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2436538' 00:37:29.764 killing process with pid 2436538 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@973 -- # kill 2436538 00:37:29.764 Received shutdown signal, test time was about 1.000000 seconds 00:37:29.764 00:37:29.764 Latency(us) 00:37:29.764 [2024-12-05T13:09:01.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.764 [2024-12-05T13:09:01.290Z] =================================================================================================================== 00:37:29.764 [2024-12-05T13:09:01.290Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:29.764 14:09:01 keyring_file -- common/autotest_common.sh@978 -- # wait 2436538 00:37:30.021 14:09:01 keyring_file -- keyring/file.sh@21 -- # killprocess 2434941 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2434941 ']' 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2434941 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434941 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434941' 00:37:30.021 killing process with pid 2434941 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@973 -- # kill 2434941 00:37:30.021 14:09:01 keyring_file -- common/autotest_common.sh@978 -- # wait 2434941 00:37:30.279 00:37:30.279 real 0m14.713s 00:37:30.279 user 0m37.630s 00:37:30.280 sys 0m3.192s 00:37:30.280 14:09:01 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:37:30.280 14:09:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.280 ************************************ 00:37:30.280 END TEST keyring_file 00:37:30.280 ************************************ 00:37:30.280 14:09:01 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:30.280 14:09:01 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:30.280 14:09:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:30.280 14:09:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.280 14:09:01 -- common/autotest_common.sh@10 -- # set +x 00:37:30.568 ************************************ 00:37:30.568 START TEST keyring_linux 00:37:30.568 ************************************ 00:37:30.568 14:09:01 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:30.568 Joined session keyring: 840630292 00:37:30.568 * Looking for test storage... 
00:37:30.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:30.568 14:09:01 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:30.568 14:09:01 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:37:30.568 14:09:01 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:30.568 14:09:01 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.568 14:09:01 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:30.568 14:09:01 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.568 14:09:01 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:30.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.568 --rc genhtml_branch_coverage=1 00:37:30.568 --rc genhtml_function_coverage=1 00:37:30.568 --rc genhtml_legend=1 00:37:30.568 --rc geninfo_all_blocks=1 00:37:30.568 --rc geninfo_unexecuted_blocks=1 00:37:30.568 00:37:30.568 ' 00:37:30.569 14:09:01 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.569 --rc genhtml_branch_coverage=1 00:37:30.569 --rc genhtml_function_coverage=1 00:37:30.569 --rc genhtml_legend=1 00:37:30.569 --rc geninfo_all_blocks=1 00:37:30.569 --rc geninfo_unexecuted_blocks=1 00:37:30.569 00:37:30.569 ' 
00:37:30.569 14:09:01 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.569 --rc genhtml_branch_coverage=1 00:37:30.569 --rc genhtml_function_coverage=1 00:37:30.569 --rc genhtml_legend=1 00:37:30.569 --rc geninfo_all_blocks=1 00:37:30.569 --rc geninfo_unexecuted_blocks=1 00:37:30.569 00:37:30.569 ' 00:37:30.569 14:09:01 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.569 --rc genhtml_branch_coverage=1 00:37:30.569 --rc genhtml_function_coverage=1 00:37:30.569 --rc genhtml_legend=1 00:37:30.569 --rc geninfo_all_blocks=1 00:37:30.569 --rc geninfo_unexecuted_blocks=1 00:37:30.569 00:37:30.569 ' 00:37:30.569 14:09:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.569 14:09:01 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.569 14:09:01 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.569 14:09:01 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.569 14:09:01 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.569 14:09:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.569 14:09:01 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.569 14:09:01 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.569 14:09:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:30.569 14:09:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:30.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:30.569 14:09:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:30.569 14:09:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:30.569 14:09:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:30.569 14:09:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:30.569 14:09:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:30.569 14:09:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:30.569 14:09:01 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:30.569 14:09:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:30.569 14:09:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:30.569 /tmp/:spdk-test:key0 00:37:30.569 14:09:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:30.569 14:09:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:30.569 14:09:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:30.569 14:09:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:30.569 14:09:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:30.569 14:09:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:30.570 14:09:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:30.570 14:09:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:30.570 14:09:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:30.570 14:09:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:30.570 14:09:02 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:30.570 14:09:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:30.570 14:09:02 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:30.570 14:09:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:30.570 14:09:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:30.570 /tmp/:spdk-test:key1 00:37:30.570 14:09:02 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2437016 00:37:30.570 14:09:02 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:30.570 14:09:02 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2437016 00:37:30.570 14:09:02 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2437016 ']' 00:37:30.570 14:09:02 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.570 14:09:02 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.570 14:09:02 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.570 14:09:02 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.570 14:09:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 [2024-12-05 14:09:02.098956] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:37:30.886 [2024-12-05 14:09:02.099069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437016 ] 00:37:30.886 [2024-12-05 14:09:02.166024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.886 [2024-12-05 14:09:02.222288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:31.145 14:09:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:31.145 [2024-12-05 14:09:02.490022] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:31.145 null0 00:37:31.145 [2024-12-05 14:09:02.522075] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:31.145 [2024-12-05 14:09:02.522622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.145 14:09:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:31.145 867150438 00:37:31.145 14:09:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:31.145 1042242394 00:37:31.145 14:09:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2437030 00:37:31.145 14:09:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k 
-w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:31.145 14:09:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2437030 /var/tmp/bperf.sock 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2437030 ']' 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:31.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.145 14:09:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:31.145 [2024-12-05 14:09:02.588507] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:37:31.145 [2024-12-05 14:09:02.588573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437030 ] 00:37:31.145 [2024-12-05 14:09:02.655479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.402 [2024-12-05 14:09:02.713671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:31.402 14:09:02 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.402 14:09:02 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:31.402 14:09:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:31.402 14:09:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:31.659 14:09:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:31.659 14:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:32.225 14:09:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:32.225 14:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:32.225 [2024-12-05 14:09:03.742758] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:32.482 nvme0n1 00:37:32.482 14:09:03 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:37:32.482 14:09:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:32.482 14:09:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:32.482 14:09:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:32.482 14:09:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:32.482 14:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.740 14:09:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:32.740 14:09:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:32.740 14:09:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:32.740 14:09:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:32.740 14:09:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.740 14:09:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.740 14:09:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:32.999 14:09:04 keyring_linux -- keyring/linux.sh@25 -- # sn=867150438 00:37:33.000 14:09:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:33.000 14:09:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:33.000 14:09:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 867150438 == \8\6\7\1\5\0\4\3\8 ]] 00:37:33.000 14:09:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 867150438 00:37:33.000 14:09:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:33.000 14:09:04 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:33.000 Running I/O for 1 seconds... 00:37:34.380 11353.00 IOPS, 44.35 MiB/s 00:37:34.380 Latency(us) 00:37:34.380 [2024-12-05T13:09:05.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.380 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:34.380 nvme0n1 : 1.01 11362.93 44.39 0.00 0.00 11199.92 8883.77 22330.79 00:37:34.380 [2024-12-05T13:09:05.906Z] =================================================================================================================== 00:37:34.380 [2024-12-05T13:09:05.906Z] Total : 11362.93 44.39 0.00 0.00 11199.92 8883.77 22330.79 00:37:34.380 { 00:37:34.380 "results": [ 00:37:34.380 { 00:37:34.380 "job": "nvme0n1", 00:37:34.380 "core_mask": "0x2", 00:37:34.380 "workload": "randread", 00:37:34.380 "status": "finished", 00:37:34.380 "queue_depth": 128, 00:37:34.380 "io_size": 4096, 00:37:34.380 "runtime": 1.010479, 00:37:34.380 "iops": 11362.92787875849, 00:37:34.380 "mibps": 44.38643702640035, 00:37:34.380 "io_failed": 0, 00:37:34.380 "io_timeout": 0, 00:37:34.380 "avg_latency_us": 11199.920049030043, 00:37:34.380 "min_latency_us": 8883.76888888889, 00:37:34.380 "max_latency_us": 22330.785185185185 00:37:34.380 } 00:37:34.380 ], 00:37:34.380 "core_count": 1 00:37:34.380 } 00:37:34.380 14:09:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:34.380 14:09:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:34.380 14:09:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:34.380 14:09:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:34.380 14:09:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:34.380 14:09:05 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:34.380 14:09:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.380 14:09:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:34.637 14:09:06 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:34.637 14:09:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:34.637 14:09:06 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:34.637 14:09:06 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:34.637 14:09:06 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:34.637 14:09:06 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:34.637 14:09:06 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:34.637 14:09:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.637 14:09:06 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:34.637 14:09:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.637 14:09:06 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:34.637 14:09:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:34.896 [2024-12-05 14:09:06.386957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:34.896 [2024-12-05 14:09:06.387939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e72e0 (107): Transport endpoint is not connected 00:37:34.896 [2024-12-05 14:09:06.388930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e72e0 (9): Bad file descriptor 00:37:34.896 [2024-12-05 14:09:06.389928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:34.896 [2024-12-05 14:09:06.389947] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:34.896 [2024-12-05 14:09:06.389976] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:34.896 [2024-12-05 14:09:06.389991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:34.896 request: 00:37:34.896 { 00:37:34.896 "name": "nvme0", 00:37:34.896 "trtype": "tcp", 00:37:34.896 "traddr": "127.0.0.1", 00:37:34.896 "adrfam": "ipv4", 00:37:34.896 "trsvcid": "4420", 00:37:34.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.896 "prchk_reftag": false, 00:37:34.896 "prchk_guard": false, 00:37:34.896 "hdgst": false, 00:37:34.896 "ddgst": false, 00:37:34.896 "psk": ":spdk-test:key1", 00:37:34.896 "allow_unrecognized_csi": false, 00:37:34.896 "method": "bdev_nvme_attach_controller", 00:37:34.896 "req_id": 1 00:37:34.896 } 00:37:34.896 Got JSON-RPC error response 00:37:34.896 response: 00:37:34.896 { 00:37:34.896 "code": -5, 00:37:34.896 "message": "Input/output error" 00:37:34.896 } 00:37:34.896 14:09:06 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:37:34.896 14:09:06 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:34.896 14:09:06 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:34.896 14:09:06 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@33 -- # sn=867150438 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 867150438 00:37:34.896 1 links removed 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:34.896 
14:09:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:34.896 14:09:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:35.155 14:09:06 keyring_linux -- keyring/linux.sh@33 -- # sn=1042242394 00:37:35.155 14:09:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1042242394 00:37:35.155 1 links removed 00:37:35.155 14:09:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2437030 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2437030 ']' 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2437030 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437030 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437030' 00:37:35.155 killing process with pid 2437030 00:37:35.155 14:09:06 keyring_linux -- common/autotest_common.sh@973 -- # kill 2437030 00:37:35.155 Received shutdown signal, test time was about 1.000000 seconds 00:37:35.155 00:37:35.155 Latency(us) 00:37:35.155 [2024-12-05T13:09:06.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.156 [2024-12-05T13:09:06.682Z] =================================================================================================================== 00:37:35.156 [2024-12-05T13:09:06.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:35.156 14:09:06 keyring_linux -- common/autotest_common.sh@978 -- # wait 
2437030 00:37:35.156 14:09:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2437016 00:37:35.156 14:09:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2437016 ']' 00:37:35.156 14:09:06 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2437016 00:37:35.156 14:09:06 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:35.415 14:09:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:35.415 14:09:06 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437016 00:37:35.415 14:09:06 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:35.416 14:09:06 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:35.416 14:09:06 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437016' 00:37:35.416 killing process with pid 2437016 00:37:35.416 14:09:06 keyring_linux -- common/autotest_common.sh@973 -- # kill 2437016 00:37:35.416 14:09:06 keyring_linux -- common/autotest_common.sh@978 -- # wait 2437016 00:37:35.674 00:37:35.674 real 0m5.327s 00:37:35.674 user 0m10.689s 00:37:35.674 sys 0m1.609s 00:37:35.674 14:09:07 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.674 14:09:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:35.674 ************************************ 00:37:35.674 END TEST keyring_linux 00:37:35.674 ************************************ 00:37:35.674 14:09:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- 
spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:35.674 14:09:07 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:35.674 14:09:07 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:35.674 14:09:07 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:35.674 14:09:07 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:35.674 14:09:07 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:35.674 14:09:07 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:35.674 14:09:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:35.674 14:09:07 -- common/autotest_common.sh@10 -- # set +x 00:37:35.674 14:09:07 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:35.674 14:09:07 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:35.674 14:09:07 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:35.674 14:09:07 -- common/autotest_common.sh@10 -- # set +x 00:37:37.576 INFO: APP EXITING 00:37:37.576 INFO: killing all VMs 00:37:37.576 INFO: killing vhost app 00:37:37.576 INFO: EXIT DONE 00:37:38.953 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:38.953 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:38.953 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:38.953 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:38.953 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:38.953 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:38.953 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:38.953 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:38.953 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:37:38.953 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:38.953 0000:80:04.6 (8086 0e26): Already using the 
ioatdma driver 00:37:38.953 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:38.953 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:38.953 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:38.953 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:38.953 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:38.953 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:40.329 Cleaning 00:37:40.329 Removing: /var/run/dpdk/spdk0/config 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:40.329 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:40.329 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:40.329 Removing: /var/run/dpdk/spdk1/config 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:40.329 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:40.329 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:40.329 Removing: /var/run/dpdk/spdk2/config 00:37:40.329 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:40.329 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:40.329 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:40.329 Removing: /var/run/dpdk/spdk3/config 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:40.329 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:40.329 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:40.329 Removing: /var/run/dpdk/spdk4/config 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:40.329 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:37:40.329 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:40.329 Removing: /dev/shm/bdev_svc_trace.1 00:37:40.329 Removing: /dev/shm/nvmf_trace.0 00:37:40.329 Removing: /dev/shm/spdk_tgt_trace.pid2116032 00:37:40.329 Removing: /var/run/dpdk/spdk0 00:37:40.329 Removing: /var/run/dpdk/spdk1 00:37:40.329 Removing: /var/run/dpdk/spdk2 00:37:40.329 Removing: /var/run/dpdk/spdk3 00:37:40.330 Removing: /var/run/dpdk/spdk4 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2114394 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2115132 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2116032 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2116410 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2117094 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2117233 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2117951 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2117961 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2118227 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2119539 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2120463 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2120777 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2120979 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2121187 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2121389 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2121551 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2121705 00:37:40.330 Removing: /var/run/dpdk/spdk_pid2121978 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2122208 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2124700 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2124862 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2125024 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2125040 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2125456 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2125467 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2125887 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2125901 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2126185 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2126202 00:37:40.590 Removing: 
/var/run/dpdk/spdk_pid2126364 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2126495 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2126870 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2127031 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2127350 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2129466 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2132105 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2139847 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2140262 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2142781 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2142945 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2145586 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2149318 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2151503 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2157927 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2163170 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2164484 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2165157 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2176049 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2178475 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2205630 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2208926 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2213380 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2217652 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2217654 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2218311 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2218967 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2219507 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2219906 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2220030 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2220166 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2220302 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2220306 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2220967 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2221508 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2222160 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2222563 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2222574 
00:37:40.590 Removing: /var/run/dpdk/spdk_pid2222830 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2223719 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2224467 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2229829 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2257853 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2260853 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2261976 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2263863 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2264000 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2264137 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2264282 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2264727 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2266043 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2266898 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2267331 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2268844 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2269263 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2269827 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2272212 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2275512 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2275513 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2275514 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2277723 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2282571 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2285220 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2289129 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2290072 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2291040 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2292133 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2294919 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2298012 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2300463 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2304703 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2304708 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2307610 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2307744 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2307874 00:37:40.590 Removing: 
/var/run/dpdk/spdk_pid2308146 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2308266 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2310922 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2311375 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2314046 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2315935 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2319453 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2322783 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2329286 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2333845 00:37:40.590 Removing: /var/run/dpdk/spdk_pid2333859 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2346761 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2347288 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2347702 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2348110 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2348693 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2349220 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2349635 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2350035 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2352545 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2352702 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2356609 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2356670 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2360026 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2362641 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2370180 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2370588 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2373091 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2373253 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2375878 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2379578 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2381730 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2388091 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2393324 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2394513 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2395179 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2405494 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2408246 
00:37:40.850 Removing: /var/run/dpdk/spdk_pid2410264 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2415209 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2415301 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2418213 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2419519 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2421035 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2421781 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2423312 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2424183 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2429498 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2429860 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2430248 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2431812 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2432208 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2432489 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2434941 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2434955 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2436538 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2437016 00:37:40.850 Removing: /var/run/dpdk/spdk_pid2437030 00:37:40.850 Clean 00:37:40.850 14:09:12 -- common/autotest_common.sh@1453 -- # return 0 00:37:40.850 14:09:12 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:40.850 14:09:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:40.850 14:09:12 -- common/autotest_common.sh@10 -- # set +x 00:37:40.850 14:09:12 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:40.850 14:09:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:40.850 14:09:12 -- common/autotest_common.sh@10 -- # set +x 00:37:40.850 14:09:12 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:40.850 14:09:12 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:40.850 14:09:12 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:40.850 14:09:12 -- 
spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:40.850 14:09:12 -- spdk/autotest.sh@398 -- # hostname 00:37:40.850 14:09:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:41.109 geninfo: WARNING: invalid characters removed from testname! 00:38:13.169 14:09:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:15.694 14:09:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:18.971 14:09:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:22.245 14:09:53 -- 
spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:24.788 14:09:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:28.071 14:09:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:31.440 14:10:02 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:31.440 14:10:02 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:31.440 14:10:02 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:31.440 14:10:02 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:31.440 14:10:02 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:31.440 14:10:02 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:31.440 + [[ -n 2043790 ]] 00:38:31.440 + sudo kill 2043790 00:38:31.450 [Pipeline] } 00:38:31.461 [Pipeline] // stage 00:38:31.465 [Pipeline] } 00:38:31.477 [Pipeline] // timeout 00:38:31.481 [Pipeline] } 00:38:31.493 [Pipeline] // catchError 00:38:31.500 [Pipeline] } 00:38:31.514 [Pipeline] // wrap 00:38:31.521 [Pipeline] } 00:38:31.534 [Pipeline] // catchError 00:38:31.542 [Pipeline] stage 00:38:31.544 [Pipeline] { (Epilogue) 00:38:31.555 [Pipeline] catchError 00:38:31.557 [Pipeline] { 00:38:31.570 [Pipeline] echo 00:38:31.571 Cleanup processes 00:38:31.577 [Pipeline] sh 00:38:31.863 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:31.863 2448193 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:31.879 [Pipeline] sh 00:38:32.165 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:32.165 ++ grep -v 'sudo pgrep' 00:38:32.165 ++ awk '{print $1}' 00:38:32.165 + sudo kill -9 00:38:32.165 + true 00:38:32.178 [Pipeline] sh 00:38:32.462 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:42.437 [Pipeline] sh 00:38:42.723 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:42.723 Artifacts sizes are good 00:38:42.738 [Pipeline] archiveArtifacts 00:38:42.744 Archiving artifacts 00:38:42.894 [Pipeline] sh 00:38:43.182 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:43.197 [Pipeline] cleanWs 00:38:43.207 [WS-CLEANUP] Deleting project workspace... 00:38:43.207 [WS-CLEANUP] Deferred wipeout is used... 00:38:43.214 [WS-CLEANUP] done 00:38:43.216 [Pipeline] } 00:38:43.232 [Pipeline] // catchError 00:38:43.243 [Pipeline] sh 00:38:43.545 + logger -p user.info -t JENKINS-CI 00:38:43.553 [Pipeline] } 00:38:43.567 [Pipeline] // stage 00:38:43.573 [Pipeline] } 00:38:43.589 [Pipeline] // node 00:38:43.593 [Pipeline] End of Pipeline 00:38:43.640 Finished: SUCCESS
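The epilogue's process cleanup (`pgrep -af … | grep -v 'sudo pgrep' | awk '{print $1}'` followed by `kill -9 … || true`) filters pgrep's own invocation out of the match list before killing. The text-munging half can be exercised on canned output — the sample PIDs below are made up for illustration:

```shell
#!/bin/sh
# Sample of what `pgrep -af <workspace>` prints: PID, then the full command line.
# The first line is the pgrep invocation itself, which must not be killed.
sample='2448193 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
2448200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt'

# Drop the pgrep process itself, keep only the PID column.
pids=$(printf '%s\n' "$sample" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"
```

In the real pipeline the result feeds `sudo kill -9`, and the trailing `+ true` in the log is the `|| true` guard that keeps the stage from failing when nothing matched.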